Updates from: 04/17/2021 03:07:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
In the provided [custom policies](https://github.com/azure-ad-b2c/partner-integr
| {your_tenant_extensions_appid} | App ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
| {your_tenant_extensions_app_objectid} | Object ID of your tenant's storage application | 01234567-89ab-cdef-0123-456789abcdef |
| {your_app_insights_instrumentation_key} | Instrumentation key of your app insights instance* | 01234567-89ab-cdef-0123-456789abcdef |
-| {your_ui_base_url} | Endpoint in your app service from where your UI files are served | https://yourapp.azurewebsites.net/B2CUI/GetUIPage |
-| {your_app_service_url} | URL of your app service | https://yourapp.azurewebsites.net |
+| {your_ui_base_url} | Endpoint in your app service from where your UI files are served | `https://yourapp.azurewebsites.net/B2CUI/GetUIPage` |
+| {your_app_service_url} | URL of your app service | `https://yourapp.azurewebsites.net` |
| {your-facebook-app-id} | App ID of the facebook app you configured for federation with Azure AD B2C | 000000000000000 |
| {your-facebook-app-secret} | Name of the policy key you've saved facebook's app secret as | B2C_1A_FacebookAppSecret |
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-nevis.md
# Tutorial to configure Nevis with Azure Active Directory B2C for passwordless authentication
-In this sample tutorial, learn how to extend Azure AD B2C with [Nevis](https://www.nevis.net/solution/authentication-cloud) to enable passwordless authentication. Nevis provides a mobile-first, fully branded end-user experience with Nevis Access app to provide strong customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
+In this sample tutorial, learn how to extend Azure AD B2C with [Nevis](https://www.nevis.net/en/solution/authentication-cloud) to enable passwordless authentication. Nevis provides a mobile-first, fully branded end-user experience with the Nevis Access app to deliver strong customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
## Prerequisites
For additional information, review the following articles:
- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
-- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration.md
Both the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authe
- your application can enable incremental consent, and supporting Conditional Access is easier
- you benefit from the innovation.
-**MSAL.NET is now the recommended auth library to use with the Microsoft identity platform**. No new features will be implemented on ADAL.NET. The efforts are focused on improving MSAL.
+**MSAL.NET or Microsoft.Identity.Web are now the recommended auth libraries to use with the Microsoft identity platform**. No new features will be implemented on ADAL.NET. The efforts are focused on improving MSAL.
This article describes the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) and helps you migrate to MSAL.
+## Should you migrate to MSAL.NET or to Microsoft.Identity.Web?
+
+Before digging into the details of MSAL.NET vs. ADAL.NET, you might want to check whether you want to use MSAL.NET or a higher-level abstraction like [Microsoft.Identity.Web](microsoft-identity-web.md).
+
+For details about the decision tree below, read [Should I use MSAL.NET only? or a higher level abstraction?](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Is-MSAL.NET-right-for-me%3F)
++
## Differences between ADAL and MSAL apps

In most cases you want to use MSAL.NET and the Microsoft identity platform, which is the latest generation of Microsoft authentication libraries. Using MSAL.NET, you acquire tokens for users signing in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C.
To use MSAL.NET you will need to add the [Microsoft.Identity.Client](https://www
### Scopes not resources
-ADAL.NET acquires tokens for *resources*, but MSAL.NET acquires tokens for *scopes*. A number of MSAL.NET AcquireToken overrides require a parameter called scopes(`IEnumerable<string> scopes`). This parameter is a simple list of strings that declare the desired permissions and resources that are requested. Well known scopes are the [Microsoft Graph's scopes](/graph/permissions-reference).
+ADAL.NET acquires tokens for *resources*, but MSAL.NET acquires tokens for *scopes*. A number of MSAL.NET AcquireToken overrides require a parameter called scopes (`IEnumerable<string> scopes`). This parameter is a simple list of strings that declare the desired permissions and resources that are requested. Well-known scopes are the [Microsoft Graph's scopes](https://docs.microsoft.com/graph/permissions-reference).
It's also possible in MSAL.NET to access v1.0 resources. See details in [Scopes for a v1.0 application](#scopes-for-a-web-api-accepting-v10-tokens).
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
When deploying non-persistent VDI, Microsoft recommends that IT administrators i
- For Windows down-level:
  - Implement **autoworkplacejoin /leave** command as part of the logoff script. This command should be triggered in the context of the user and should be executed before the user has logged off completely and while there is still network connectivity.
- For Windows current in a Federated environment (e.g. AD FS):
- - Implement **dsregcmd /join** as part of VM boot sequence.
+ - Implement **dsregcmd /join** as part of VM boot sequence/order and before user signs in.
  - **DO NOT** execute dsregcmd /leave as part of VM shutdown/restart process.
- Define and implement process for [managing stale devices](manage-stale-devices.md).
  - Once you have a strategy to identify your non-persistent Hybrid Azure AD joined devices (e.g. using computer display name prefix), you should be more aggressive on the clean-up of these devices to ensure your directory does not get consumed with lots of stale devices.
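The **dsregcmd /join** boot-sequence step above can be scripted. The following is a minimal sketch, not part of the original article: it assumes a non-persistent, federated (AD FS) VDI host where the script runs in the SYSTEM context before any user signs in, and it only illustrates the recommended join-at-boot, never-leave-at-shutdown pattern.

```powershell
# Minimal sketch (assumption: runs as SYSTEM at boot, before user sign-in)
# for a non-persistent VDI host in a federated (AD FS) environment.
$status = & dsregcmd /status

# Only attempt the join if the device is not already Azure AD joined.
if ($status -notmatch 'AzureAdJoined\s*:\s*YES') {
    & dsregcmd /join
}

# Intentionally no "dsregcmd /leave" at shutdown/restart, per the guidance above;
# stale device objects are cleaned up in the directory instead.
```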
active-directory Hybrid Azuread Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/hybrid-azuread-join-manual.md
Previously updated : 05/14/2019 Last updated : 04/16/2021
The following script shows an example for using the cmdlet. In this script, `$aa
The `Initialize-ADSyncDomainJoinedComputerSync` cmdlet:
-* Uses the Active Directory PowerShell module and Azure Active Directory Domain Services (Azure AD DS) tools. These tools rely on Active Directory Web Services running on a domain controller. Active Directory Web Services is supported on domain controllers running Windows Server 2008 R2 and later.
+* Uses the Active Directory PowerShell module and Active Directory Domain Services (AD DS) tools. These tools rely on Active Directory Web Services running on a domain controller. Active Directory Web Services is supported on domain controllers running Windows Server 2008 R2 and later.
* Is only supported by the MSOnline PowerShell module version 1.1.166.0. To download this module, use [this link](https://www.powershellgallery.com/packages/MSOnline/1.1.166.0).
* If the AD DS tools are not installed, `Initialize-ADSyncDomainJoinedComputerSync` will fail. You can install the AD DS tools through Server Manager under **Features** > **Remote Server Administration Tools** > **Role Administration Tools**.
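As a convenience, the two prerequisites above can also be installed from an elevated PowerShell session on a Windows Server machine. This is a hedged sketch rather than part of the original article; verify the feature and module names against your environment.

```powershell
# Install the exact MSOnline module version required by Initialize-ADSyncDomainJoinedComputerSync.
Install-Module -Name MSOnline -RequiredVersion 1.1.166.0 -Scope AllUsers

# Install the AD DS tools (RSAT) so the cmdlet can reach Active Directory Web Services.
# On Windows Server this is the same component exposed in Server Manager under
# Features > Remote Server Administration Tools > Role Administration Tools.
Install-WindowsFeature -Name RSAT-AD-Tools -IncludeAllSubFeature
```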
active-directory Add Guest To Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-guest-to-role.md
Azure Active Directory (Azure AD) B2B collaboration users are added as guest users to the directory, and guest permissions in the directory are restricted by default. Your business may need some guest users to fill higher-privilege roles in your organization. To support defining higher-privilege roles, guest users can be added to any roles you desire, based on your organization's needs.
+If a directory role is assigned to a guest user, the guest user will be granted the additional permissions that come with the role, including basic read permissions. See [Azure AD built-in roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference).
+
## Default role

![Screenshot showing the default directory role](./media/add-guest-to-role/default-role.png)
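As an illustration of adding a guest user to a directory role, here is a minimal PowerShell sketch using the AzureAD module. It is not part of the original article; the guest UPN and the role name are placeholders, and the role must already be activated in the tenant for `Get-AzureADDirectoryRole` to return it.

```powershell
# Connect with an account that is allowed to manage role assignments.
Connect-AzureAD

# Placeholders: substitute your guest user's UPN and the directory role you want to grant.
$guest = Get-AzureADUser -Filter "userPrincipalName eq 'guest_contoso.com#EXT#@fabrikam.onmicrosoft.com'"
$role  = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'User Administrator' }

# Grant the role; the guest keeps the guest userType but gains the role's permissions.
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $guest.ObjectId
```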
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
After you sign in to the Azure portal, you can create a new tenant for your orga
Your new tenant is created with the domain contoso.onmicrosoft.com.
+## Your user account in the new tenant
+
+When you create a new Azure AD tenant, you become the first user of that tenant. As the first user, you're automatically assigned the [Global Administrator](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference#global-administrator) role. Check out your user account by navigating to the [**Users**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/MsGraphUsers) page.
+
+By default, you're also listed as the [technical contact](https://docs.microsoft.com/microsoft-365/admin/manage/change-address-contact-and-more?view=o365-worldwide#what-do-these-fields-mean) for the tenant. Technical contact information is something you can change in [**Properties**](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Properties).
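The Global Administrator assignment described above can also be confirmed from the command line with the AzureAD PowerShell module. This sketch is an illustration added here, not part of the original article; it assumes the role has already been activated in the tenant (older module versions may list it as "Company Administrator").

```powershell
# Sign in to the new tenant.
Connect-AzureAD

# Find the Global Administrator role (shown as "Company Administrator" in some older module versions).
$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -in 'Global Administrator', 'Company Administrator' } |
    Select-Object -First 1

# List its members; the account that created the tenant should appear here.
Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId |
    Select-Object DisplayName, UserPrincipalName
```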
+
## Clean up resources

If you're not going to continue to use this application, you can delete the tenant using the following steps:

-- Ensure that you are signed in to the directory that you want to delete through the **Directory + subscription** filter in the Azure portal, and switching to the target directory if needed.
+- Ensure that you're signed in to the directory that you want to delete through the **Directory + subscription** filter in the Azure portal. Switch to the target directory if needed.
- Select **Azure Active Directory**, and then on the **Contoso - Overview** page, select **Delete directory**. The tenant and its associated information are deleted.
active-directory Active Directory Users Assign Role Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md
For more information about the available Azure AD roles, see [Assigning administ
A common way to assign Azure AD roles to a user is on the **Assigned roles** page for a user. You can also configure the user eligibility to be elevated just-in-time into a role using Privileged Identity Management (PIM). For more information about how to use PIM, see [Privileged Identity Management](../privileged-identity-management/index.yml).
-If a directory role is assigned to a guest user, the guest user will be granted with additional permissions that come with the role, including basic read permissions. See [Azure AD built-in roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference).
- > [!Note] > If you have an Azure AD Premium P2 license plan and already use PIM, all role management tasks are performed in the [Privileged Identity Management experience](../roles/manage-roles-portal.md). This feature is currently limited to assigning only one role at a time. You can't currently select multiple roles and assign them to a user all at once. >
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/users-default-permissions.md
The set of default permissions received depends on whether the user is a native
**Area** | **Member user permissions** | **Default guest user permissions** | **Restricted guest user permissions (Preview)**
- | - | - | -
-Users and contacts | <ul><li>Enumerate list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change own password<li>Manage own mobile phone number<li>Manage own photo<li>Invalidate own refresh tokens</li></ul> | <ul><li>Read own properties<li>Read display name, email, sign in name, photo, user principal name, and user type properties of other users and contacts<li>Change own password<li>Search for another user by ObjectId (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read own properties<li>Change own password</li></ul>
+Users and contacts | <ul><li>Enumerate list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change own password<li>Manage own mobile phone number<li>Manage own photo<li>Invalidate own refresh tokens</li></ul> | <ul><li>Read own properties<li>Read display name, email, sign in name, photo, user principal name, and user type properties of other users and contacts<li>Change own password<li>Search for another user by ObjectId (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read own properties<li>Change own password</li><li>Manage own mobile phone number</li></ul>
Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined group<li>Manage properties, ownership, and membership of groups the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by Display Name or ObjectId (if allowed)</li></ul> | <ul><li>Read object id for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>
Applications | <ul><li>Register (create) new application<li>Enumerate list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application password for user<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul>
Devices | <ul><li>Enumerate list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
We fixed a bug where changes to the [HomeRealmDiscovery policy](../manage-apps/c
In March 2020, we've added these 51 new apps with Federation support to the app gallery:
-[Cisco AnyConnect](../saas-apps/cisco-anyconnect.md), [Zoho One China](../saas-apps/zoho-one-china-tutorial.md), [PlusPlus](https://test.plusplus.app/auth/login/azuread-outlook/), [Profit.co SAML App](../saas-apps/profitco-saml-app-tutorial.md), [iPoint Service Provider](../saas-apps/ipoint-service-provider-tutorial.md), [contexxt.ai SPHERE](https://contexxt-sphere.com/login), [Wisdom By Invictus](../saas-apps/wisdom-by-invictus-tutorial.md), [Flare Digital Signage](https://spark-dev.pixelnebula.com/login), [Logz.io - Cloud Observability for Engineers](../saas-apps/logzio-cloud-observability-for-engineers-tutorial.md), [SpectrumU](../saas-apps/spectrumu-tutorial.md), [BizzContact](https://bizzcontact.app/), [Elqano SSO](../saas-apps/elqano-sso-tutorial.md), [MarketSignShare](http://www.signshare.com/), [CrossKnowledge Learning Suite](../saas-apps/crossknowledge-learning-suite-tutorial.md), [Netvision Compas](../saas-apps/netvision-compas-tutorial.md), [FCM HUB](../saas-apps/fcm-hub-tutorial.md), [RIB )
+[Cisco AnyConnect](../saas-apps/cisco-anyconnect.md), [Zoho One China](../saas-apps/zoho-one-china-tutorial.md), [PlusPlus](https://test.plusplus.app/auth/login/azuread-outlook/), [Profit.co SAML App](../saas-apps/profitco-saml-app-tutorial.md), [iPoint Service Provider](../saas-apps/ipoint-service-provider-tutorial.md), [contexxt.ai SPHERE](https://contexxt-sphere.com/login), [Wisdom By Invictus](../saas-apps/wisdom-by-invictus-tutorial.md), [Flare Digital Signage](https://spark-dev.pixelnebula.com/login), [Logz.io - Cloud Observability for Engineers](../saas-apps/logzio-cloud-observability-for-engineers-tutorial.md), [SpectrumU](../saas-apps/spectrumu-tutorial.md), [BizzContact](https://www.bizzcontact.app/), [Elqano SSO](../saas-apps/elqano-sso-tutorial.md), [MarketSignShare](http://www.signshare.com/), [CrossKnowledge Learning Suite](../saas-apps/crossknowledge-learning-suite-tutorial.md), [Netvision Compas](../saas-apps/netvision-compas-tutorial.md), [FCM HUB](../saas-apps/fcm-hub-tutorial.md), [RIB )
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../develop/v2-howto-app-gallery-listing.md).
A hotfix roll-up package (build 4.4.1642.0) is available as of September 25, 201
For more information, see [Hotfix rollup package (build 4.4.1642.0) is available for Identity Manager 2016 Service Pack 1](https://support.microsoft.com/help/4021562). -+
active-directory How To Connect Sync Endpoint Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2.md
Microsoft has deployed a new endpoint (API) for Azure AD Connect that improves t
> [!NOTE] > Currently, the new endpoint does not have a configured group size limit for Microsoft 365 groups that are written back. This may have an effect on your Active Directory and sync cycle latencies. It is recommended to increase your group sizes incrementally.
+>[!NOTE]
+> The Azure AD Connect sync V2 endpoint API is currently only available in these Azure environments:
+> - Azure Commercial
+> - Azure China cloud
+> - Azure US Government cloud
+>
+> It will not be made available in the Azure German cloud.
+
## Prerequisites

In order to use the new V2 endpoint, you will need to use [Azure AD Connect version 1.5.30.0](https://www.microsoft.com/download/details.aspx?id=47594) or later and follow the deployment steps provided below to enable the V2 endpoint for your Azure AD Connect server.
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
## 1.6.4.0
+>[!NOTE]
+> The Azure AD Connect sync V2 endpoint API is now available in these Azure environments:
+> - Azure Commercial
+> - Azure China cloud
+> - Azure US Government cloud
+>
+> It will not be made available in the Azure German cloud.
+ ### Release status 3/31/2021: Released for download only, not available for auto upgrade
Please follow this link to read more about [auto upgrade](how-to-connect-install
>[!NOTE]
> - This release will be made available for download only.
> - The upgrade to this release will require a full synchronization due to sync rule changes.
-> - This release defaults the AADConnect server to the new V2 end point. Note that this end point is not supported in the German national cloud, the Chinese national cloud and the US government cloud and if you need to deploy this version in these clouds you need to follow [these instructions](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2#rollback) to switch back to the V1 end point. Failure to do so will result in errors in synchronization.
+> - This release defaults the AADConnect server to the new V2 end point. Note that this end point is not supported in the German national cloud and if you need to deploy this version in this environment you need to follow [these instructions](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-endpoint-api-v2#rollback) to switch back to the V1 end point. Failure to do so will result in errors in synchronization.
### Release status 3/19/2021: Released for download, not available for auto upgrade
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-connectors.md
You don't have to manually delete connectors that are unused. When a connector i
## Automatic updates
-Azure AD provides automatic updates for all the connectors that you deploy. As long as the Application Proxy Connector Updater service is running, your connectors update automatically. If you don't see the Connector Updater service on your server, you need to [reinstall your connector](application-proxy-add-on-premises-application.md) to get any updates.
+Azure AD provides automatic updates for all the connectors that you deploy. As long as the Application Proxy Connector Updater service is running, your connectors [update with the latest major connector release](application-proxy-faq.yml#why-is-my-connector-still-using-an-older-version-and-not-auto-upgraded-to-latest-version-) automatically. If you don't see the Connector Updater service on your server, you need to [reinstall your connector](application-proxy-add-on-premises-application.md) to get any updates.
If you don't want to wait for an automatic update to come to your connector, you can do a manual upgrade. Go to the [connector download page](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download) on the server where your connector is located and select **Download**. This process kicks off an upgrade for the local connector.
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
na Previously updated : 03/16/2021 Last updated : 04/16/2021
# Azure Active Directory sign-in activity reports - preview
-The reporting architecture in Azure Active Directory (Azure AD) consists of the following components:
+The Azure Active Directory portal gives you access to three activity logs:
+
+- **Sign-ins** – Information about sign-ins and how your resources are used by your users.
+- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
+- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-- **Activity**
- - **Sign-ins** ΓÇô Information about when users, applications, and managed resources sign in to Azure AD and access resources.
- - **Audit logs** - [Audit logs](concept-audit-logs.md) provide system activity information about users and group management, managed applications, and directory activities.
-- **Security**
- - **Risky sign-ins** - A [risky sign-in](../identity-protection/overview-identity-protection.md) is an indicator for a sign-in attempt by someone who isn't the legitimate owner of a user account.
- - **Users flagged for risk** - A [risky user](../identity-protection/overview-identity-protection.md) is an indicator for a user account that might have been compromised.
The classic sign-ins report in Azure Active Directory provides you with an overview of interactive user sign-ins. In addition, you now have access to three additional sign-in reports that are now in preview:
Interactive user sign-ins are sign-ins where a user provides an authentication f
-Note: The interactive user sign-ins report used to contain some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non interactive, they were included in the interactive user sign-ins report for additional visibility. Once the non-interactive user sign-ins report entered public preview in November 2020, those non-interactive sign-in event logs were moved to the non-interactive user sign in report for increased accuracy.
+> [!NOTE]
+> The interactive user sign-ins report used to contain some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-ins report for additional visibility. Once the non-interactive user sign-ins report entered public preview in November 2020, those non-interactive sign-in event logs were moved to the non-interactive user sign-in report for increased accuracy.
**Report size:** small <br>
Select an item in the list view to display all sign-ins that are grouped under a
Select a grouped item to see all details of the sign-in.
+## Sign-in error code
+
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
+
+![Screenshot shows a detailed information view.](./media/concept-all-sign-ins/error-code.png)
+
+While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
+
+![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
+++
## Filter sign-in activities

By setting a filter, you can narrow down the scope of the returned sign-in data. Azure AD provides you with a broad range of additional filters you can set. When setting your filter, you should always pay special attention to your configured **Date** range filter. A proper date range filter ensures that Azure AD only returns the data you really care about.
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-sign-ins.md
na Previously updated : 03/24/2020 Last updated : 04/16/2021
# Sign-in activity reports in the Azure Active Directory portal
-The reporting architecture in Azure Active Directory (Azure AD) consists of the following components:
+The Azure Active Directory portal gives you access to three activity logs:
-- **Activity**
- - **Sign-ins** ΓÇô Information about the usage of managed applications and user sign-in activities.
- - **Audit logs** - [Audit logs](concept-audit-logs.md) provide system activity information about users and group management, managed applications, and directory activities.
- - **Provisioning logs** - [Provisioning logs](./concept-provisioning-logs.md) allow customers to monitor activity by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
-- **Security**
- - **Risky sign-ins** - A [risky sign-in](../identity-protection/overview-identity-protection.md) is an indicator for a sign-in attempt by someone who isn't the legitimate owner of a user account.
- - **Users flagged for risk** - A [risky user](../identity-protection/overview-identity-protection.md) is an indicator for a user account that might have been compromised.
+- **Sign-ins** – Information about sign-ins and how your resources are used by your users.
+- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant such as users and group management or updates applied to your tenant's resources.
+- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by the provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.
This article gives you an overview of the sign-ins report.
Select an item in the list view to get more detailed information.
> For more information, see the [Frequently asked questions about CA information in all sign-ins](reports-faq.md#conditional-access).
+## Sign-in error code
+
+If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item.
+
+![sign-in error code](./media/concept-all-sign-ins/error-code.png)
+
+While the log item provides you with a failure reason, there are cases where you might get more information using the [sign-in error lookup tool](https://login.microsoftonline.com/error). For example, if available, this tool provides you with remediation steps.
+
+![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
++
## Filter sign-in activities
active-directory Bentley Automatic User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bentley-automatic-user-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Bentley - Automatic User Provisioning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Bentley - Automatic User Provisioning.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 08778fff-f252-45c2-95d4-cc640c288af3
+++
+ na
+ms.devlang: na
+ Last updated : 04/13/2021
+# Tutorial: Configure Bentley - Automatic User Provisioning for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Bentley - Automatic User Provisioning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Bentley - Automatic User Provisioning](https://www.bentley.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Bentley - Automatic User Provisioning
+> * Remove users in Bentley - Automatic User Provisioning when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Bentley - Automatic User Provisioning
+> * Provision groups and group memberships in Bentley - Automatic User Provisioning
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Federated account with Bentley IMS.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Bentley - Automatic User Provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Bentley - Automatic User Provisioning to support provisioning with Azure AD
+
+Reach out to the Bentley User Provisioning [support](https://communities.bentley.com/communities/other_communities/licensing_cloud_and_web_services/w/wiki/52836/microsoft-azure-ad-automatic-user-provisioning-configuration) team for Tenant URL and Secret Token. These values will be entered in the Provisioning tab of the Bentley application in the Azure portal.
+
+## Step 3. Add Bentley - Automatic User Provisioning from the Azure AD application gallery
+
+Add Bentley - Automatic User Provisioning from the Azure AD application gallery to start managing provisioning to Bentley - Automatic User Provisioning. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Bentley - Automatic User Provisioning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Bentley - Automatic User Provisioning
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Bentley - Automatic User Provisioning based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Bentley - Automatic User Provisioning in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Bentley - Automatic User Provisioning**.
+
+ ![The Bentley - Automatic User Provisioning link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Bentley - Automatic User Provisioning Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Bentley - Automatic User Provisioning. If the connection fails, ensure your Bentley - Automatic User Provisioning account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Bentley - Automatic User Provisioning**.
+
+9. Review the user attributes that are synchronized from Azure AD to Bentley - Automatic User Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Bentley - Automatic User Provisioning for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Bentley - Automatic User Provisioning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |userName|String|&check;|
+ |title|String|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |addresses[type eq "work"].streetAddress|String|
+ |addresses[type eq "work"].locality|String|
+ |addresses[type eq "work"].region|String|
+ |addresses[type eq "work"].postalCode|String|
+ |addresses[type eq "work"].country|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |externalId|String|
+ |urn:ietf:params:scim:schemas:extension:Bentley:2.0:User:isSoftDeleted|String|
+
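As an aside (not part of the original tutorial), the requirement in step 9 that the SCIM API support filtering on the matching attribute (**userName**) can be spot-checked with a simple SCIM query. This is a minimal sketch: the tenant URL path and the use of the secret token as a bearer token are assumptions; use the exact endpoint and authentication details provided by Bentley support in Step 2.

```powershell
# Minimal sketch: verify the SCIM endpoint can filter users on the matching attribute (userName).
# $tenantUrl and $secretToken are the values obtained from Bentley support (placeholders here).
$tenantUrl   = "https://example.bentley.com/scim"      # placeholder, not a real endpoint
$secretToken = "<secret-token-from-bentley-support>"   # placeholder

# Standard SCIM 2.0 filter syntax on userName, the attribute marked "Supported for Filtering" above.
$filter = [uri]::EscapeDataString('userName eq "B.Simon@contoso.com"')

Invoke-RestMethod -Method Get `
    -Uri "$tenantUrl/Users?filter=$filter" `
    -Headers @{ Authorization = "Bearer $secretToken" }
```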
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Bentley - Automatic User Provisioning**.
+
+11. Review the group attributes that are synchronized from Azure AD to Bentley - Automatic User Provisioning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Bentley - Automatic User Provisioning for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for Filtering|
+ ||||
+ |displayName|String|&check;|
+ |externalId|String|
+ |members|Reference|
+ |urn:ietf:params:scim:schemas:extension:Bentley:2.0:Group:description|String|
+
+12. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Bentley - Automatic User Provisioning, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Bentley - Automatic User Provisioning by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Connector limitations
+* The enterprise extension attribute "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager" is not supported and will be removed.
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Cisco Intersight Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cisco-intersight-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Cisco Intersight | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Cisco Intersight.
+ Last updated : 04/08/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Cisco Intersight
+
+In this tutorial, you'll learn how to integrate Cisco Intersight with Azure Active Directory (Azure AD). When you integrate Cisco Intersight with Azure AD, you can:
+
+* Control in Azure AD who has access to Cisco Intersight.
+* Enable your users to be automatically signed-in to Cisco Intersight with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cisco Intersight single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Cisco Intersight supports **SP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding Cisco Intersight from the gallery
+
+To configure the integration of Cisco Intersight into Azure AD, you need to add Cisco Intersight from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cisco Intersight** in the search box.
+1. Select **Cisco Intersight** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Cisco Intersight
+
+Configure and test Azure AD SSO with Cisco Intersight using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco Intersight.
+
+To configure and test Azure AD SSO with Cisco Intersight, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cisco Intersight SSO](#configure-cisco-intersight-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Cisco Intersight test user](#create-cisco-intersight-test-user)** - to have a counterpart of B.Simon in Cisco Intersight that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Cisco Intersight** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign on URL** text box, type the URL:
+ `https://intersight.com`
+
+ b. In the **Identifier (Entity ID)** text box, type the URL:
+ `www.intersight.com`
+
+1. Your Cisco Intersight application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname**, but Cisco Intersight expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list or use the appropriate attribute value based on your organization's configuration.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Cisco Intersight application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | First_Name | user.givenname |
+ | Last_Name | user.surname |
+ | memberOf | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Cisco Intersight** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cisco Intersight.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cisco Intersight**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Cisco Intersight SSO
+
+To configure single sign-on on the **Cisco Intersight** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Cisco Intersight support team](mailto:intersight-feedback@cisco.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Cisco Intersight test user
+
+In this section, you create a user called Britta Simon in Cisco Intersight. Work with [Cisco Intersight support team](mailto:intersight-feedback@cisco.com) to add the users in the Cisco Intersight platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Cisco Intersight Sign-on URL where you can initiate the login flow.
+
+* Go to Cisco Intersight Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Cisco Intersight tile in the My Apps, this will redirect to Cisco Intersight Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Cisco Intersight you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Desknets Neo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/desknets-neo-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with desknets NEO | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and desknets NEO.
+ Last updated : 04/08/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with desknet's NEO
+
+In this tutorial, you'll learn how to integrate desknet's NEO with Azure Active Directory (Azure AD). When you integrate desknet's NEO with Azure AD, you can:
+
+* Control in Azure AD who has access to desknet's NEO.
+* Enable your users to be automatically signed-in to desknet's NEO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* desknet's NEO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* desknet's NEO supports **SP** initiated SSO.
+
+## Adding desknet's NEO from the gallery
+
+To configure the integration of desknet's NEO into Azure AD, you need to add desknet's NEO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **desknet's NEO** in the search box.
+1. Select **desknet's NEO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for desknet's NEO
+
+Configure and test Azure AD SSO with desknet's NEO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in desknet's NEO.
+
+To configure and test Azure AD SSO with desknet's NEO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure desknet's NEO SSO](#configure-desknets-neo-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create desknet's NEO test user](#create-desknets-neo-test-user)** - to have a counterpart of B.Simon in desknet's NEO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **desknet's NEO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.dn-cloud.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.dn-cloud.com/cgi-bin/dneo/zsaml.cgi`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.dn-cloud.com/cgi-bin/dneo/dneo.cgi`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [desknet's NEO Client support team](mailto:cloudsupport@desknets.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up desknet's NEO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
++
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to desknet's NEO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **desknet's NEO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure desknet's NEO SSO
+
+1. Sign in to your desknet's NEO company site as an administrator.
+
+1. In the menu, click **SAML authentication link settings** icon.
+
+ ![Screenshot for SAML authentication link settings.](./media/desknets-neo-tutorial/saml-authentication-icon.png)
+
+1. In the **Common settings**, click **use** from SAML Authentication Collaboration.
+
+ ![Screenshot for SAML authentication use.](./media/desknets-neo-tutorial/saml-authentication-use.png)
+
+1. Perform the following steps in the **SAML authentication link settings** section.
+
+ ![Screenshot for SAML authentication link settings section.](./media/desknets-neo-tutorial/saml-authentication.png)
+
+ a. In the **Access URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal.
+
+ b. In the **SP Entity ID** textbox, paste the **Identifier** value, which you have copied from the Azure portal.
+
+ c. Click **Choose File** to upload the downloaded **Certificate (Base64)** file from the Azure portal into the **x.509 Certificate** textbox.
+
+ d. Click **change**.
+
+### Create desknet's NEO test user
+
+1. Sign in to your desknet's NEO company site as an administrator.
+
+1. In the menu, click the **Administrator settings** icon.
+
+ ![Screenshot for Administrator settings.](./media/desknets-neo-tutorial/administrator-settings.png)
+
+1. Click the **settings** icon and select **User management** under **Custom settings**.
+
+ ![Screenshot for User management settings.](./media/desknets-neo-tutorial/user-management.png)
+
+1. Click **Create user information**.
+
+ ![Screenshot for User information button.](./media/desknets-neo-tutorial/create-new-user.png)
+
+1. Fill in the required fields on the following page and click **creation**.
+
+ ![Screenshot for User creation section.](./media/desknets-neo-tutorial/create-new-user-2.png)
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the desknet's NEO Sign-on URL, where you can initiate the login flow.
+
+* Go to the desknet's NEO Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the desknet's NEO tile in My Apps, you're redirected to the desknet's NEO Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure desknet's NEO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Dropboxforbusiness Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in Dropbox for Business and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Dropbox for Business. > [!IMPORTANT]
-> Microsoft and Dropbox will be deprecating the old Dropbox integration effective 04/01/2021. To avoid disruption of service, we recommend migrating to the new Dropbox integration which supports Groups. To migrate to the new Dropbox integration, add and configure a new instance of Dropbox for Provisioning in your Azure AD tenant using the steps below. Once you have configured the new Dropbox integration, disable Provisioning on the old Dropbox integration to avoid Provisioning conflicts. For more detailed steps on migrating to the new Dropbox integration, see [Update to the newest Dropbox for Business application using Azure AD](https://help.dropbox.com/installs-integrations/third-party/update-dropbox-azure-ad-connector).
+> In the future, Microsoft and Dropbox will be deprecating the old Dropbox integration. This was originally planned for 4/1/2021, but has been postponed indefinitely. However, to avoid disruption of service, we recommend migrating to the new SCIM 2.0 Dropbox integration which supports Groups. To migrate to the new Dropbox integration, add and configure a new instance of Dropbox for Provisioning in your Azure AD tenant using the steps below. Once you have configured the new Dropbox integration, disable Provisioning on the old Dropbox integration to avoid Provisioning conflicts. For more detailed steps on migrating to the new Dropbox integration, see [Update to the newest Dropbox for Business application using Azure AD](https://help.dropbox.com/installs-integrations/third-party/update-dropbox-azure-ad-connector).
> [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
active-directory Logicgate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logicgate-provisioning-tutorial.md
This operation starts the initial synchronization cycle of all users and groups
Once you've configured provisioning, use the following resources to monitor your deployment: 1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+2. Check the [progress bar](/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status). ## Additional resources
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
If you decide to sign up for the free Microsoft 365 developer program, you need
At this point, you have created a tenant with 25 E5 user licenses. The E5 licenses include Azure AD P2 licenses. Optionally, you can add sample data packs with users, groups, mail, and SharePoint to help you test in your development environment. For the Verifiable Credential Issuing service, they are not required.
-For your convenience, you could add your own work account as [guest](https://docs.microsoft.com/azure/active-directory/b2b/b2b-quickstart-add-guest-users-portal.md) in the newly created tenant and use that account to administer the tenant. If you want the guest account to be able to manage the Verifiable Credential Service you need to assign the role 'Global Administrator' to that user.
+For your convenience, you could add your own work account as [guest](/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal) in the newly created tenant and use that account to administer the tenant. If you want the guest account to be able to manage the Verifiable Credential Service you need to assign the role 'Global Administrator' to that user.
## Next steps
-Now that you have a developer account you can try our [first tutorial](get-started-verifiable-credentials.md) to learn more about verifiable credentials.
+Now that you have a developer account you can try our [first tutorial](get-started-verifiable-credentials.md) to learn more about verifiable credentials.
active-directory How To Dnsbind https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/how-to-dnsbind.md
If any of the above are not true, the Microsoft Authenticator will display a ful
4. Copy your DID and open the [ION Network Explorer](https://identity.foundation/ion/explorer) to verify the same domain is included in the DID Document.
-5. Host the well-known config resource at the location specified. Example: https://www.example.com/.well-known/did-configuration.json
+5. Host the well-known config resource at the location specified. Example: `https://www.example.com/.well-known/did-configuration.json`
6. Test out issuing or presenting with Microsoft Authenticator to validate. Make sure the setting in Authenticator 'Warn about unsafe apps' is toggled on.
Congratulations, you now have bootstrapped the web of trust with your DID!
## Next steps
-If during onboarding you enter the wrong domain information of you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
+If during onboarding you enter the wrong domain information or you decide to change it, you will need to [opt out](how-to-opt-out.md). At this time, we don't support updating your DID document. Opting out and opting back in will create a brand new DID.
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
You can reference upcoming version releases and deprecations on the [AKS Kuberne
For new **minor** versions of Kubernetes: * AKS publishes a pre-announcement with the planned date of a new version release and respective old version deprecation on the [AKS Release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal.
+ * AKS uses [Azure Advisor](https://docs.microsoft.com/azure/advisor/advisor-overview) to alert users if a new version will cause issues in their cluster because of deprecated APIs. Azure Advisor is also used to alert the user if they are currently out of support.
 * AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access, and sends an email to the subscription administrators with the planned version removal dates.
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|--|--|--|--|
-| 1.17 | Dec-09-19 | Jan 2019 | Jul 2020 | 1.20 GA |
| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | 1.21 GA |
| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
-| 1.21 | Apr-08-21* | May 2021 | Jun 2021 | 1.24 GA |
+| 1.21 | Apr-08-21 | May 2021 | Jun 2021 | 1.24 GA |
-\* The Kubernetes 1.21 Upstream release is subject to change as the Upstream calender as yet to be finalized.
## FAQ
+**How does Microsoft notify me of new Kubernetes versions?**
+
+The AKS team publishes pre-announcements with the planned dates of new Kubernetes versions in our documentation, on our [GitHub](https://github.com/Azure/AKS/releases), and in emails to subscription administrators who own clusters that are going to fall out of support. In addition to announcements, AKS uses [Azure Advisor](https://docs.microsoft.com/azure/advisor/advisor-overview) to notify customers inside the Azure portal if they are out of support, and to alert them to deprecated APIs that will affect their application or development process.
+ **How often should I expect to upgrade Kubernetes versions to stay in support?** Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you will be able to upgrade at a minimum of once a year to stay on a supported version.
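To check this yourself, the Azure CLI can list both the Kubernetes versions currently offered in a region and the upgrades available for an existing cluster. A minimal sketch; the location, resource group, and cluster name below are placeholders:

```azurecli
# List the Kubernetes versions AKS currently offers in a region.
az aks get-versions --location eastus --output table

# Show the upgrade paths available for an existing cluster.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```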
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
By publishing and managing your APIs via Azure API Management, you're taking advantage of fault tolerance and infrastructure capabilities that you'd otherwise design, implement, and manage manually. The Azure platform mitigates a large fraction of potential failures at a fraction of the cost.
-To recover from availability problems that affect the region that hosts your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The service backup and restore features provides the necessary building blocks for implementing disaster recovery strategy.
+To recover from availability problems that affect the region that hosts your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The service backup and restore features provide the necessary building blocks for implementing disaster recovery strategy.
-Backup and restore operations can also be used for replicating API Management service configuration between operational environments, e.g. development and staging. Beware that runtime data such as users and subscriptions will be copied as well, which might not always be desirable.
+Backup and restore operations can also be used for replicating API Management service configuration between operational environments, for example, development and staging. Beware that runtime data such as users and subscriptions will be copied as well, which might not always be desirable.
This guide shows how to automate backup and restore operations and how to ensure successful authentication of backup and restore requests by Azure Resource Manager.
All of the tasks that you do on resources using the Azure Resource Manager must
5. Choose **Azure Service Management**. 6. Press **Select**.
- ![Add permissions](./media/api-management-howto-disaster-recovery-backup-restore/add-app.png)
+ :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/add-app-permission.png" alt-text="Screenshot that shows how to add app permissions.":::
7. Click **Delegated Permissions** beside the newly added application, check the box for **Access Azure Service Management (preview)**.+
+ :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/delegated-app-permission.png" alt-text="Screenshot that shows adding delegated app permissions.":::
+ 8. Press **Select**.
-9. Click **Grant Permissions**.
+9. Click **Add Permissions**.
### Configuring your app
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/mock-api-responses.md Binary files differ
app-service App Service App Service Environment Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-web-application-firewall.md
Once you log in, you should see a dashboard like the one in the following image
![Management Dashboard][ManagementDashboard]
-Clicking on the **Services** tab lets you configure your WAF for services it is protecting. For more details on configuring your Barracuda WAF, see [their documentation](https://techlib.barracuda.com/waf/getstarted1). In the following example, an App Service app serving traffic on HTTP and HTTPS has been configured.
+Clicking on the **Services** tab lets you configure your WAF for services it is protecting. For more details on configuring your Barracuda WAF, see [their documentation](https://campus.barracuda.com/product/webapplicationfirewall/doc/4259884/configure-the-barracuda-web-application-firewall-from-the-web-interface/). In the following example, an App Service app serving traffic on HTTP and HTTPS has been configured.
![Management Add Services][ManagementAddServices]
Replace the SourceAddressPrefix with the Virtual IP Address (VIP) of your WAF's
[ManagementLoginPage]: ./media/app-service-app-service-environment-web-application-firewall/ManagementLoginPage.png [TrafficManagerEndpoint]: ./media/app-service-app-service-environment-web-application-firewall/TrafficManagerEndpoint.png [ConfigureTrafficManager]: ./media/app-service-app-service-environment-web-application-firewall/ConfigureTrafficManager.png
-[WebsiteTranslations]: ./media/app-service-app-service-environment-web-application-firewall/WebsiteTranslations.png
+[WebsiteTranslations]: ./media/app-service-app-service-environment-web-application-firewall/WebsiteTranslations.png
automation Automation Tutorial Troubleshoot Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-tutorial-troubleshoot-changes.md
- Title: Troubleshoot changes on an Azure VM in Azure Automation | Microsoft Docs
-description: This article tells how to troubleshoot changes on an Azure VM.
--
-keywords: change, tracking, change tracking, inventory, automation
Previously updated : 03/21/2021----
-# Troubleshoot changes on an Azure VM
-
-In this tutorial, you learn how to troubleshoot changes on an Azure virtual machine. By enabling Change Tracking and Inventory, you can track changes to software, files, Linux daemons, Windows Services, and Windows Registry keys on your computers.
-Identifying these configuration changes can help you pinpoint operational issues across your environment.
-
-In this tutorial you learn how to:
-
-> [!div class="checklist"]
-> * Enable Change Tracking and Inventory for a VM
-> * Search change logs for stopped services
-> * Configure change tracking
-> * Enable Activity log connection
-> * Trigger an event
-> * View changes
-> * Configure alerts
-
-## Prerequisites
-
-To complete this tutorial, you need:
-
-* An Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An [Automation account](./index.yml) to hold the watcher and action runbooks and the Watcher task.
-* A [virtual machine](../virtual-machines/windows/quick-create-portal.md) to enable for the feature.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at https://portal.azure.com.
-
-## Enable Change Tracking and Inventory
-
-First you need to enable Change Tracking and Inventory for this tutorial. If you've previously enabled the feature, this step is not necessary.
-
->[!NOTE]
->If the fields are grayed out, another Automation feature is enabled for the VM, and you must use same workspace and Automation account.
-
-1. Select **Virtual machines** and select a VM from the list.
-2. On the left menu, select **Inventory** under **Operations**. The Inventory page opens.
-
- ![Enable change](./media/automation-tutorial-troubleshoot-changes/enableinventory.png)
-
-3. Choose the [Log Analytics](../azure-monitor/logs/log-query-overview.md) workspace. This workspace collects data that is generated by features such as Change Tracking and Inventory. The workspace provides a single location to review and analyze data from multiple sources.
-
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)]
-
-4. Select the Automation account to use.
-
-5. Configure the location for the deployment.
-
-5. Click **Enable** to deploy the feature for your VM.
-
-During setup, the VM is provisioned with the Log Analytics agent for Windows and a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md). Enabling Change Tracking and Inventory can take up to 15 minutes. During this time, you shouldn't close the browser window.
-
-After the feature is enabled, information about installed software and changes on the VM flows to Azure Monitor logs.
-It can take between 30 minutes and 6 hours for the data to be available for analysis.
-
-## Use Change Tracking and Inventory in Azure Monitor logs
-
-Change Tracking and Inventory generates log data that is sent to Azure Monitor logs. To search the logs by running queries, select **Log Analytics** at the top of the Change tracking page. Change tracking data is stored under the type `ConfigurationChange`.
-
-The following example Log Analytics query returns all the Windows services that have been stopped.
-
-```loganalytics
-ConfigurationChange
-| where ConfigChangeType == "WindowsServices" and SvcState == "Stopped"
-```
-
-To learn more about running and searching log files in Azure Monitor logs, see [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md).
-
-## Configure change tracking
-
-With change tracking, you choose the files and registry keys to collect and track using **Edit settings** at the top of the Change tracking page on your VM. You can add Windows registry keys, Windows files, or Linux files to track on the Workspace Configuration page.
-
-> [!NOTE]
-> Both change tracking and inventory use the same collection settings, and settings are configured on a workspace level.
-
-### Add a Windows registry key
-
-1. On the **Windows Registry** tab, select **Add**.
-
-1. On the Add Windows Registry for Change Tracking page, enter the information for the key to track and click **Save**
-
- |Property |Description |
- |||
- |Enabled | Determines if the setting is applied |
- |Item Name | Friendly name of the file to be tracked |
- |Group | A group name for logically grouping files |
- |Windows Registry Key | The path to check for the file For example: "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders\Common Startup" |
-
-### Add a Windows file
-
-1. On the **Windows Files** tab, select **Add**.
-
-1. On the Add Windows File for Change Tracking page, enter the information for the file or directory to track and click **Save**
-
- |Property |Description |
- |||
- |Enabled | Determines if the setting is applied |
- |Item Name | Friendly name of the file to be tracked |
- |Group | A group name for logically grouping files |
- |Enter Path | The path to check for the file For example: "c:\temp\\\*.txt"<br>You can also use environment variables such as "%winDir%\System32\\\*.*" |
- |Recursion | Determines if recursion is used when looking for the item to be tracked. |
- |Upload file content for all settings| Turns on or off file content upload on tracked changes. Available options: **True** or **False**.|
-
-### Add a Linux file
-
-1. On the **Linux Files** tab, select **Add**.
-
-1. On the Add Linux File for Change Tracking page, enter the information for the file or directory to track and click **Save**.
-
- |Property |Description |
- |||
- |Enabled | Determines if the setting is applied |
- |Item Name | Friendly name of the file to be tracked |
- |Group | A group name for logically grouping files |
- |Enter Path | The path to check for the file For example: "/etc/*.conf" |
- |Path Type | Type of item to be tracked, possible values are File and Directory |
- |Recursion | Determines if recursion is used when looking for the item to be tracked. |
- |Use Sudo | This setting determines if sudo is used when checking for the item. |
- |Links | This setting determines how symbolic links dealt with when traversing directories.<br> **Ignore** - Ignores symbolic links and does not include the files/directories referenced<br>**Follow** - Follows the symbolic links during recursion and also includes the files/directories referenced<br>**Manage** - Follows the symbolic links and allows alter the treatment of returned content |
- |Upload file content for all settings| Turns on or off file content upload on tracked changes. Available options: True or False.|
-
- > [!NOTE]
- > The **Manage** value for the **Links** property is not recommended. File content retrieval is not supported.
-
-## Enable Activity log connection
-
-1. From the Change tracking page on your VM, select **Manage Activity Log Connection**.
-
-2. On the Azure Activity log page, click **Connect** to connect Change Tracking and Inventory to the Azure activity log for your VM.
-
-3. Navigate to the Overview page for your VM and select **Stop** to stop your VM.
-
-4. When prompted, select **Yes** to stop the VM.
-
-5. When the VM is deallocated, select **Start** to restart it. Stopping and starting a VM logs an event in its Activity Log.
-
-## View changes
-
-1. Navigate back to the Change tracking page and select the **Events** tab at the bottom of the page.
-
-2. After a while, change tracking events are shown in the chart and the table. The chart shows changes that have occurred over time. The line graph at the top displays Azure Activity Log events. Each row of bar graphs represents a different trackable change type. These types are Linux daemons, files, Windows registry keys, software, and Windows services. The change tab shows the details for the displayed changes, with the most recent change displayed first.
-
- ![View events in the portal](./media/automation-tutorial-troubleshoot-changes/viewevents.png)
-
-3. Notice that there have been multiple changes to the system, including changes to services and software. You can use the filters at the top of the page to filter the results by **Change type** or by a time range.
-
- ![List of changes to the VM](./media/automation-tutorial-troubleshoot-changes/change-tracking-list.png)
-
-4. Select a **WindowsServices** change. This selection opens the Change Details page showing details about the change and the values before and after the change. In this instance, the Software Protection service was stopped.
-
- ![Viewing change details in the portal](./media/automation-tutorial-troubleshoot-changes/change-details.png)
-
-## Configure alerts
-
-Viewing changes in the Azure portal can be helpful, but being able to be alerted when a change occurs, such as a stopped service is more beneficial. Let's add an alert for a stopped service.
-
-1. In the Azure portal, go to **Monitor**.
-
-2. Select **Alerts** under **Shared Services**, and click **+ New alert rule**.
-
-3. Click **Select** to choose a resource.
-
-4. On the Select a resource page, choose **Log Analytics** from the **Filter by resource type** dropdown menu.
-
-5. Select your Log Analytics workspace, and then click **Done**.
-
- ![Select a resource](./media/automation-tutorial-troubleshoot-changes/select-a-resource.png)
-
-6. Click **Add condition**.
-
-7. In the table on the Configure signal logic page, select **Custom log search**.
-
-8. Enter the following query in the search query text box:
-
- ```loganalytics
- ConfigurationChange | where ConfigChangeType == "WindowsServices" and SvcName == "W3SVC" and SvcState == "Stopped" | summarize by Computer
- ```
-
- This query returns the computers that had the W3SVC service stopped in the specified timeframe.
-
-9. For **Threshold** under **Alert logic**, enter **0**. When you're finished, click **Done**.
-
- ![Configure signal logic](./media/automation-tutorial-troubleshoot-changes/configure-signal-logic.png)
-
-10. Select **Create New** under **Action Groups**. An action group is a group of actions that you can use across multiple alerts. The actions can include but are not limited to email notifications, runbooks, webhooks, and many more. To learn more about action groups, see [Create and manage action groups](../azure-monitor/alerts/action-groups.md).
-
-11. Under **Alert details**, enter a name and description for the alert.
-
-12. Set **Severity** to **Informational(Sev 2)**, **Warning(Sev 1)**, or **Critical(Sev 0)**.
-
-13. In the **Action group name** box, enter a name for the alert and a short name. The short name is used in place of a full action group name when notifications are sent using this group.
-
-14. For **Actions**, enter a name for the action, such as **Email Administrators**.
-
-15. For **ACTION TYPE**, select **Email/SMS message/Push/Voice**.
-
-16. For **DETAILS**, select **Edit details**.
-
- :::image type="content" source="./media/automation-tutorial-troubleshoot-changes/add-action-group.png" alt-text="Usage and estimated costs." lightbox="./media/automation-tutorial-troubleshoot-changes/add-action-group.png":::
-
-17. In the **Email/SMS message/Push/Voice** pane, enter a name, select the **Email** checkbox, and then enter a valid email address. When finished, click **OK** on the pane, then click **OK** on the **Add action group** page.
-
-18. To customize the subject of the alert email, select **Customize Actions**.
-
-19. For **Create rule**, select **Email subject**, then choose **Create alert rule**. The alert tells you when an update deployment succeeds, and which machines were part of that update deployment run. The following image is an example email received when the W3SVC service stops.
-
- ![Screen capture shows an email notification received when the W 3 S V C services stops.](./media/automation-tutorial-troubleshoot-changes/email.png)
-
-## Next steps
-
-In this tutorial you learned how to:
-
-> [!div class="checklist"]
-> * Enable Change Tracking and Inventory for a VM
-> * Search change logs for stopped services
-> * Configure change tracking
-> * Enable Activity Log connection
-> * Trigger an event
-> * View changes
-> * Configure alerts
-
-Continue to the overview for the Change Tracking and Inventory feature to learn more about it.
-
-> [!div class="nextstepaction"]
-> [Change Tracking and Inventory overview](change-tracking/overview.md)
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/update-management.md
Title: Troubleshoot Azure Automation Update Management issues
description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. Previously updated : 01/13/2021 Last updated : 04/16/2021
To register the Automation resource provider, follow these steps in the Azure po
5. If it's not listed, register the Microsoft.Automation provider by following the steps at [Resolve errors for resource provider registration](../../azure-resource-manager/templates/error-register-resource-provider.md).
-## <a name="scheduled-update-missed-machines"></a>Scenario: Scheduled update with a dynamic schedule missed some machines
+## <a name="scheduled-update-missed-machines"></a>Scenario: Scheduled update did not patch some machines
### Issue
-Machines included in an update preview don't all appear in the list of machines patched during a scheduled run.
+Machines included in an update preview don't all appear in the list of machines patched during a scheduled run, or VMs for selected scopes of a dynamic group are not showing up in the update preview list in the portal.
+
+The update preview list consists of all machines retrieved by an [Azure Resource Graph](../../governance/resource-graph/overview.md) query for the selected scopes. The scopes are filtered for machines that have a system Hybrid Runbook Worker installed and for which you have access permissions.
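To see what a scope-level query returns, a similar query can also be run from the Azure CLI. This is a sketch that assumes the resource-graph extension is installed; the OS type filter and projected columns are illustrative only.

```azurecli
# Install the Resource Graph extension (one time) and list the Windows VMs such a query would return.
az extension add --name resource-graph
az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' and properties.storageProfile.osDisk.osType == 'Windows' | project name, resourceGroup, location"
```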
### Cause
This issue can have one of the following causes:
* The machines weren't available or didn't have appropriate tags when the schedule executed.
+* You don't have the correct access on the selected scopes.
+
+* The Azure Resource Graph query doesn't retrieve the expected machines.
+
+* The system Hybrid Runbook Worker isn't installed on the machines.
+ ### Resolution #### Subscriptions not configured for registered Automation resource provider
Use the following procedure if your subscription is configured for the Automatio
7. Rerun the update schedule to ensure that deployment with the specified dynamic groups includes all machines.
-## <a name="machines-not-in-preview"></a>Scenario: Expected machines don't appear in preview for dynamic group
-
-### Issue
-
-VMs for selected scopes of a dynamic group are not showing up in the Azure portal preview list. This list consists of all machines retrieved by an ARG query for the selected scopes. The scopes are filtered for machines that have Hybrid Runbook Workers installed and for which you have access permissions.
-
-### Cause
-
-Here are possible causes for this issue:
-
-* You don't have the correct access on the selected scopes.
-* The ARG query doesn't retrieve the expected machines.
-* Hybrid Runbook Worker isn't installed on the machines.
-
-### Resolution
- #### Incorrect access on selected scopes The Azure portal only displays machines for which you have write access in a given scope. If you don't have the correct access for a scope, see [Tutorial: Grant a user access to Azure resources using the Azure portal](../../role-based-access-control/quickstart-assign-role-user-portal.md).
-#### ARG query doesn't return expected machines
+#### Resource Graph query doesn't return expected machines
Follow the steps below to find out if your queries are working correctly.
-1. Run an ARG query formatted as shown below in the Resource Graph explorer blade in Azure portal. This query mimics the filters you selected when you created the dynamic group in Update Management. See [Use dynamic groups with Update Management](../update-management/configure-groups.md).
+1. Run an Azure Resource Graph query formatted as shown below in the Resource Graph explorer blade in Azure portal. If you are new to Azure Resource Graph, see this [quickstart](../../governance/resource-graph/first-query-portal.md) to learn how to work with Resource Graph explorer. This query mimics the filters you selected when you created the dynamic group in Update Management. See [Use dynamic groups with Update Management](../update-management/configure-groups.md).
```kusto where (subscriptionId in~ ("<subscriptionId1>", "<subscriptionId2>") and type =~ "microsoft.compute/virtualmachines" and properties.storageProfile.osDisk.osType == "<Windows/Linux>" and resourceGroup in~ ("<resourceGroupName1>","<resourceGroupName2>") and location in~ ("<location1>","<location2>") )
Follow the steps below to find out if your queries are working correctly.
#### Hybrid Runbook Worker not installed on machines
-Machines do appear in ARG query results but still don't show up in the dynamic group preview. In this case, the machines might not be designated as hybrid workers and thus can't run Azure Automation and Update Management jobs. To ensure that the machines you're expecting to see are set up as Hybrid Runbook Workers:
+Machines do appear in Azure Resource Graph query results, but still don't show up in the dynamic group preview. In this case, the machines might not be designated as system Hybrid Runbook workers and thus can't run Azure Automation and Update Management jobs. To ensure that the machines you're expecting to see are set up as system Hybrid Runbook Workers:
1. In the Azure portal, go to the Automation account for a machine that is not appearing correctly.
Machines do appear in ARG query results but still don't show up in the dynamic g
4. Validate that the hybrid worker is present for that machine.
-5. If the machine is not set up as a hybrid worker, make adjustments using instructions at [Automate resources in your datacenter or cloud by using Hybrid Runbook Worker](../automation-hybrid-runbook-worker.md).
-
-6. Join the machine to the Hybrid Runbook Worker group.
+5. If the machine is not set up as a system Hybrid Runbook Worker, review the methods to enable the machine under the [Enable Update Management](../update-management/overview.md#enable-update-management) section of the Update Management Overview article. The method to enable is based on the environment the machine is running in.
-7. Repeat the steps above for all machines that have not been displaying in the preview.
+6. Repeat the steps above for all machines that have not been displaying in the preview.
## <a name="components-enabled-not-working"></a>Scenario: Update Management components enabled, while VM continues to show as being configured
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md Binary files differ
azure-monitor Alerts Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-common-schema-definitions.md
description: Understanding the common alert schema definitions for Azure Monitor
Last updated 04/12/2021- # Common alert schema definitions
Any alert instance describes the resource that was affected and the cause of the
| Field | Description| |:|:|
-| alertId | The GUID uniquely identifying the alert instance. |
+| alertId | The unique resource ID identifying the alert instance. |
| alertRule | The name of the alert rule that generated the alert instance. | | Severity | The severity of the alert. Possible values: Sev0, Sev1, Sev2, Sev3, or Sev4. | | signalType | Identifies the signal on which the alert rule was defined. Possible values: Metric, Log, or Activity Log. |
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/create-new-resource.md
Sign in to the [Azure portal](https://portal.azure.com), and create an Applicati
| **Name** | `Unique value` | Name that identifies the app you are monitoring. | | **Resource Group** | `myResourceGroup` | Name for the new or existing resource group to host App Insights data. | | **Region** | `East US` | Choose a location near you, or near where your app is hosted. |
- | **Resource Mode** | `Classic` or `Workspace-based` | Workspace-based resources are currently in public preview and allow you to send your Application Insights telemetry to a common Log Analytics workspace. For more information, see the [article on workspace-based resources](create-workspace-resource.md).
+ | **Resource Mode** | `Classic` or `Workspace-based` | Workspace-based resources allow you to send your Application Insights telemetry to a common Log Analytics workspace. For more information, see the [article on workspace-based resources](create-workspace-resource.md).
> [!NOTE] > While you can use the same resource name across different resource groups, it can be beneficial to use a globally unique name. This can be useful if you plan to [perform cross resource queries](../logs/cross-workspace-query.md#identifying-an-application) as it simplifies the required syntax.
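If you'd rather script resource creation than use the portal, a workspace-based resource can also be created with the Azure CLI. This is a sketch that assumes the application-insights CLI extension; the resource names and workspace resource ID are placeholders.

```azurecli
# Create a workspace-based Application Insights resource (names are placeholders).
az extension add --name application-insights
az monitor app-insights component create \
  --app my-app-insights \
  --location eastus \
  --resource-group myResourceGroup \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```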
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ilogger.md
This code is required only when you use a standalone logging provider. For regul
} ```
+> [!IMPORTANT]
+> New Azure regions **require** the use of connection strings instead of instrumentation keys. A [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. It also lets you modify the endpoints your resource uses as a destination for your telemetry. Copy the connection string and add it to your application's code or to an environment variable.
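One common approach is to expose the value through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable, which the Application Insights SDKs look for. A sketch with a placeholder value:

```bash
# Placeholder value - copy the real connection string from your Application Insights resource's Overview page.
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"
```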
+ ## Next steps Learn more about:
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/opencensus-python.md
Last updated 09/24/2020 ++ # Set up Azure Monitor for your Python application
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
As explained in [Planning your Private Link setup](#planning-your-private-link-s
You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
-To create and manage private link scopes, use the [REST API](/rest/api/monitor/private%20link%20scopes%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
+To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
To manage network access, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
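A minimal sketch of what that looks like with the CLI; the scope, workspace, and resource group names are placeholders:

```azurecli
# Create an Azure Monitor Private Link Scope (AMPLS).
az monitor private-link-scope create --name my-ampls --resource-group myResourceGroup

# Restrict ingestion and query on a Log Analytics workspace to private link only.
az monitor log-analytics workspace update \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --ingestion-access Disabled \
  --query-access Disabled
```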
If you're connecting to your Azure Monitor resources over a Private Link, traffi
## Next steps -- Learn about [private storage](private-storage.md)
+- Learn about [private storage](private-storage.md)
azure-resource-manager Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 04/08/2021 Last updated : 04/16/2021 # Deletion of Azure resources for complete mode deployments
Jump to a resource provider namespace:
> | galleries | Yes | > | galleries / applications | No | > | galleries / applications / versions | No |
-> | galleries / images | No |
-> | galleries / images / versions | No |
+> | galleries / images | Yes |
+> | galleries / images / versions | Yes |
> | hostGroups | Yes | > | hostGroups / hosts | Yes | > | images | Yes |
azure-sql Elastic Pool Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-manage.md
To create and manage SQL Database elastic pools and pooled databases, use these
|[Elastic pools - Delete](/rest/api/sql/elasticpools/delete)|Deletes the elastic pool.| |[Elastic pools - Get](/rest/api/sql/elasticpools/get)|Gets an elastic pool.| |[Elastic pools - List by server](/rest/api/sql/elasticpools/listbyserver)|Returns a list of elastic pools in a server.|
-|[Elastic pools - Update](/rest/api/sql/elasticpools/listbyserver)|Updates an existing elastic pool.|
+|[Elastic pools - Update](/rest/api/sql/2020-11-01-preview/elasticpools/update)|Updates an existing elastic pool.|
|[Elastic pool activities](/rest/api/sql/elasticpoolactivities)|Returns elastic pool activities.| |[Elastic pool database activities](/rest/api/sql/elasticpooldatabaseactivities)|Returns activity on databases inside of an elastic pool.| |[Databases - Create or update](/rest/api/sql/databases/createorupdate)|Creates a new database or updates an existing database.|
To create and manage SQL Database elastic pools and pooled databases, use these
## Next steps * To learn more about design patterns for SaaS applications using elastic pools, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](saas-tenancy-app-design-patterns.md).
-* For a SaaS tutorial using elastic pools, see [Introduction to the Wingtip SaaS application](saas-dbpertenant-wingtip-app-overview.md).
+* For a SaaS tutorial using elastic pools, see [Introduction to the Wingtip SaaS application](saas-dbpertenant-wingtip-app-overview.md).
azure-sql Serverless Tier Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/serverless-tier-overview.md
Previously updated : 4/15/2021 Last updated : 4/16/2021 # Azure SQL Database serverless [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The resources of a serverless database are encapsulated by app package, SQL inst
#### App package
-The app package is the outer most resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external services that together scope all user and system resources used by a database in SQL Database. Examples of external services include R and full-text search. The SQL instance generally dominates the overall resource utilization across the app package.
+The app package is the outermost resource management boundary for a database, regardless of whether the database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external services, such as full-text search, that together scope all user and system resources used by a database in SQL Database. The SQL instance generally dominates the overall resource utilization across the app package.
#### User resource pool
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
For additional assistance, see the following resources, which were developed in
||| |[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.| |[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/Microsoft/DataMigrationTeam/tree/master/IBM%20Db2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/db2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
+
+|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
backup Backup Support Matrix Mars Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-mars-agent.md
Title: Support matrix for the MARS agent description: This article summarizes Azure Backup support when you back up machines that are running the Microsoft Azure Recovery Services (MARS) agent. Previously updated : 08/30/2019 Last updated : 04/09/2021
By using the [Instant Restore](backup-instant-restore-capability.md) feature of
Backups can't be restored to a target machine that's running an earlier version of the operating system. For example, a backup taken from a computer that's running Windows 7 can be restored on Windows 8 or later. But a backup taken from a computer that's running Windows 8 can't be restored on a computer that's running Windows 7.
+## Previous MARS agent versions
+
+The following table lists the previous versions of the agent with their download links. We recommend that you upgrade to the latest agent version so you can take advantage of the latest features and optimal performance.
+
+**Versions** | **KB Articles**
+--- | ---
+[2.0.9145.0](https://download.microsoft.com/download/4/5/E/45EB38B4-2DA7-45FA-92E1-5CA1E23D18D1/MARSAgentInstaller.exe) | Not available
+[2.0.9151.0](https://download.microsoft.com/download/7/1/7/7177B70A-51E8-434D-BDF2-FA3A09E917D6/MARSAgentInstaller.exe) | Not available
+[2.0.9153.0](https://download.microsoft.com/download/3/D/D/3DD8A2FF-AC48-4A62-8566-B2C05F0BCCD0/MARSAgentInstaller.exe) | Not available
+[2.0.9162.0](https://download.microsoft.com/download/0/1/0/010E598E-6289-47DB-872A-FFAF5030E6BE/MARSAgentInstaller.exe) | Not available
+[2.0.9169.0](https://download.microsoft.com/download/f/7/1/f716c719-24bc-4337-af48-113baddc14d8/MARSAgentInstaller.exe) | [4515971](https://support.microsoft.com/help/4538314)
+[2.0.9170.0](https://download.microsoft.com/download/1/8/7/187ca9a9-a6e5-45f0-928f-9a843d84aed5/MARSAgentInstaller.exe) | Not available
+[2.0.9173.0](https://download.microsoft.com/download/7/9/2/79263a35-de87-4ba6-9732-65563a4274b6/MARSAgentInstaller.exe) | [4538314](https://support.microsoft.com/help/4538314)
+[2.0.9177.0](https://download.microsoft.com/download/3/0/4/304d3cdf-b123-42ee-ad03-98fb895bc38f/MARSAgentInstaller.exe) | Not available
+[2.0.9181.0](https://download.microsoft.com/download/6/6/9/6698bc49-e30b-4a3e-a1f4-5c859beafdcc/MARSAgentInstaller.exe) | Not available
+[2.0.9190.0](https://download.microsoft.com/download/a/c/e/aceffec0-794e-4259-8107-92a3f6c10f55/MARSAgentInstaller.exe) | [4575948](https://support.microsoft.com/help/4575948)
+[2.0.9195.0](https://download.microsoft.com/download/6/1/3/613b70a7-f400-4806-9d98-ae26aeb70be9/MARSAgentInstaller.exe) | [4582474](https://support.microsoft.com/help/4582474)
+[2.0.9197.0](https://download.microsoft.com/download/2/7/5/27531ace-3100-43bc-b4af-7367680ea66b/MARSAgentInstaller.exe) | [4589598](https://support.microsoft.com/help/4589598)
+[2.0.9207.0](https://download.microsoft.com/download/b/5/a/b5a29638-1cef-4906-b704-4d3d914af76e/MARSAgentInstaller.exe) | [5001305](https://support.microsoft.com/help/5001305)
+
+>[!NOTE]
+>MARS agent versions with minor reliability and performance improvements don't have a KB article.
+ ## Next steps - Learn more about [backup architecture that uses the MARS agent](backup-architecture.md#architecture-direct-backup-of-on-premises-windows-server-machines-or-azure-vm-files-or-folders).
backup Offline Backup Azure Data Box https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/offline-backup-azure-data-box.md
If no other server has offline seeding configured and no other server is depende
From the server you're trying to configure for offline backup, perform the following actions.
-1. Go to the **Manage computer certificate application** > **Personal** tab, and look for the certificate with the name `CB_AzureADCertforOfflineSeeding_<ResourceId>`.
+1. Go to the **Manage computer certificate application** > **Personal** tab, and look for the certificate with the name `CB_AzureADCertforOfflineSeeding_<Timestamp>`.
2. Select the certificate, right-click **All Tasks**, and select **Export** without a private key in the .cer format.
baremetal-infrastructure Concepts Oracle High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/concepts-oracle-high-availability.md
+
+ Title: High availability and disaster recovery for Oracle on BareMetal
+description: Learn about high availability and disaster recovery for Oracle on Azure BareMetal Infrastructure.
++ Last updated : 04/15/2021++
+# High availability and disaster recovery for Oracle on BareMetal
+
+In this article, we'll look at the basics of high availability and disaster recovery. We'll then introduce how you can achieve high availability and disaster recovery in an Oracle environment on the BareMetal Infrastructure.
+
+## High availability vs. disaster recovery
+
+Both high availability and disaster recovery provide coverage, but from different types of failures. They use different features and options of the Oracle Database.
+
+High availability allows a system to overcome multiple failures without affecting the application's user experience. Common characteristics of a highly available system include:
+
+- Redundant hardware that has no single point of failure.
+- Automatic recovery from non-critical failures, such as failed disk drives or faulty network cables.
+- The ability to roll hardware and software changes without any noticeable effect on processing.
+- Meets or exceeds goals for recovery time objectives (RTO) and recovery point objectives (RPO).
+
+The most common feature of Oracle used for high availability is [Oracle Real Application Clusters (RAC)](https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/introduction-to-oracle-rac.html#GUID-5A1B02A2-A327-42DD-A1AD-20610B2A9D92).
+
+Disaster recovery protects you from unrecoverable localized failures that would hurt your primary high availability strategy. In the Oracle ecosystem, it's provided through database replication, also known as [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/preface.html#GUID-B6209E95-9DA8-4D37-9BAD-3F000C7E3590).
+
+## Next steps
+
+Learn more about high availability features for Oracle:
+
+> [!div class="nextstepaction"]
+> [High availability features for Oracle on BareMetal Infrastructure](high-availability-features.md)
baremetal-infrastructure High Availability Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/high-availability-features.md
+
+ Title: High availability features for Oracle on Azure BareMetal
+description: Learn about the features available in BareMetal for an Oracle database.
++ Last updated : 04/15/2021++
+# High availability features for Oracle on Azure BareMetal
+
+In this article, we'll look at the key high availability and disaster recovery features of Oracle.
+
+Oracle offers many features to build a resilient platform for running Oracle databases. While no single feature provides coverage for every type of failure, combining technologies in a layered fashion creates a highly available system. Not every feature is required to maintain availability. But combining strategies gives you the best protection from the assortment of failures that can occur.
+
+## Flashback Database
+
+The [Flashback Database](https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/FLASHBACK-DATABASE.html#GUID-584AC79A-40C5-45CA-8C63-DED3BE3A4511) feature comes in Oracle Database Enterprise Edition. It rewinds the database to a specific point in time. This feature is distinct from a [Recovery Manager (RMAN)](https://docs.oracle.com/en/cloud/paas/db-backup-cloud/csdbb/performing-general-restore-and-recovery-operations.html) point-in-time recovery in that it rewinds from the current point in time, rather than forward-winds after a restore. It results in much faster completion times.
+
+You can use this feature alongside [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/19/sbydb/preface.html#GUID-B6209E95-9DA8-4D37-9BAD-3F000C7E3590). Flashback Database allows a database administrator to reinstantiate a failed database back into a Data Guard configuration without a full RMAN restore and recovery. This feature allows you to restore disaster recovery capability (and any offloaded reporting and backup benefits with Active Data Guard) much faster.
+
+You can use this feature instead of a time-delayed redo on the standby database. A standby database can be flashed back to a point prior to a problem.
+
+The Oracle Database keeps flashback logs in the fast recovery area (FRA). These logs are separate from the redo logs and require more space within the FRA. By default, 24 hours of flashback logs are kept, but you can change this setting per your requirements.
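As a sketch, the retention window is controlled by the `DB_FLASHBACK_RETENTION_TARGET` initialization parameter (specified in minutes), which can be checked and changed from SQL*Plus; the 48-hour value below is only an example:

```bash
# Connect as SYSDBA and adjust the flashback log retention window (values are in minutes).
sqlplus -s / as sysdba <<'EOF'
SHOW PARAMETER db_flashback_retention_target
-- Example: keep 48 hours (2880 minutes) of flashback logs instead of the 24-hour default.
ALTER SYSTEM SET db_flashback_retention_target=2880 SCOPE=BOTH;
EOF
```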
+
+## Oracle Real Application Clusters
+
+[Oracle Real Application Clusters (RAC)](https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/introduction-to-oracle-rac.html#GUID-5A1B02A2-A327-42DD-A1AD-20610B2A9D92) allows multiple interconnected servers to appear as one database service to end users and applications. This feature removes many points of failure and is a recognized high availability active/active solution for Oracle databases.
+
+As shown in the following figure from Oracle's [High Availability Overview and Best Practices](https://docs.oracle.com/en/database/oracle/oracle-database/19/haovw/ha-features.html), a single RAC database is presented to the application layer. The applications connect to the SCAN listener, which directs traffic to a specific database instance. RAC controls access from multiple instances to maintain data consistency across separate compute nodes.
+
+![Diagram showing an overview of the architecture of Oracle RAC.](media/oracle-high-availability/oracle-real-application-clusters.png)
+
+If one instance fails, the service continues on all other remaining instances. Each database deployed on the solution will be in a RAC configuration of n+1, where n is the minimum processing power required to support the service.
+
+Oracle Database services allow connections to fail over transparently between nodes when an instance fails. Such failures may be planned or unplanned. Working with the application (through fast application notification events), when an instance becomes unavailable, the service is relocated to a surviving node. The service moves to a node specified in the service configuration as either preferred or available.
+
+Another key feature of Oracle Database services is that a service starts only when the database has a matching role. This feature is used when there is a Data Guard failover. All patterns deployed using Data Guard are required to link a database service to a Data Guard role.
+
+For example, two services could be created, MY\_DB\_APP and MY\_DB\_AS. The MY\_DB\_APP service is started only when the database instance is started with the Data Guard role of PRIMARY. MY\_DB\_AS is started only when the Data Guard role is PHYSICAL\_STANDBY. This configuration allows applications to point to the \_APP service, while reporting can be offloaded to the Active Data Guard standby by pointing to the \_AS service, as shown in the sketch below.
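
As a rough illustration, assuming a two-node RAC database named MYDB with instances MYDB1 and MYDB2 (all placeholder names), the role-based services could be created with srvctl like this:

```bash
# Example only: create role-based services for a two-node RAC database (names are placeholders).
# MY_DB_APP starts only when the database runs in the PRIMARY role.
srvctl add service -db MYDB -service MY_DB_APP -preferred "MYDB1,MYDB2" -role PRIMARY

# MY_DB_AS starts only when the database runs in the PHYSICAL_STANDBY role (Active Data Guard reporting).
srvctl add service -db MYDB -service MY_DB_AS -preferred "MYDB1,MYDB2" -role PHYSICAL_STANDBY

# Start the application service on the primary.
srvctl start service -db MYDB -service MY_DB_APP
```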
+
+## Oracle Data Guard
+
+With Data Guard, you can maintain an identical copy of a database on separate physical hardware. Ideally, the hardware should be geographically separated. Data Guard places no limit on the distance, although distance has a bearing on modes of protection. Increased distance adds latency between sites, which can cause some options (such as synchronous replication) to no longer be viable.
+
+Data Guard offers advantages over storage-level replication:
+
+- As the replication is database-aware, only relevant traffic is replicated.
+- Certain workloads can generate high input/output on temporary tablespaces, which aren't required on standby and so aren't replicated.
+- Validation on the replicated blocks occurs at the standby database, ensuring that physical corruptions introduced on the primary database aren't replicated to the standby database.
+- Prevents logical intra-block corruptions and lost-write corruptions. It also eliminates the risk that mistakes made by storage administrators are replicated to the standby.
+- Redo apply can be delayed for a pre-determined period, so user errors aren't immediately replicated to the standby (see the sketch after this list).
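
As a sketch of that last point, the apply delay can be set through the Data Guard broker DelayMins property; the database name STANDBYDB and the 240-minute delay are placeholders, and the commands assume the broker is already enabled.

```bash
# Example only: delay redo apply on the standby by 4 hours (names and values are placeholders).
# "/" connects with OS authentication as SYSDBA on the database host.
dgmgrl / <<'EOF'
EDIT DATABASE 'STANDBYDB' SET PROPERTY DelayMins=240;
SHOW DATABASE 'STANDBYDB' 'DelayMins';
EOF
```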
+
+## Azure NetApp Files snapshots
+
+The Azure NetApp Files storage solution used in BareMetal allows you to create snapshots of volumes. Snapshots allow you to revert a filesystem to a specific point in time quickly. Snapshot technologies allow recovery time objective (RTO) times that are only a fraction of the time associated with restoring a database backup.
+
+Snapshot functionality for Oracle databases is available through Azure NetApp SnapCenter. SnapCenter allows you to schedule and automate volume snapshot creation and restoration.
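
For illustration, an on-demand snapshot of an Azure NetApp Files volume can also be taken with the Azure CLI; the resource group, account, pool, volume, and region names below are placeholders, and SnapCenter scheduling is configured separately.

```bash
# Example only: create an on-demand snapshot of an Azure NetApp Files volume (all names are placeholders).
az netappfiles snapshot create \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCapacityPool \
  --volume-name oradata01 \
  --name pre-maintenance-snapshot \
  --location westeurope
```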
+
+## Recovery Manager
+
+Recovery Manager (RMAN) is the preferred utility for taking physical database backups. RMAN interacts with the database control file (or a centralized recovery catalog) to protect the various core components of the database, including:
+
+- Database datafiles
+- Archived redo logs
+- Database control files
+- Database initialization files (spfile)
+
+RMAN allows you to take hot or cold database backups. You can use these backups to create standby databases or to duplicate databases to clone environments. RMAN also has a restore validation function. This function reads a backup set and determines whether you can use it to recover the database to a specific point in time.
+
+Because RMAN is an Oracle-provided utility, it can read the internal structure of database files. This allows you to run physical and logical corruption checks during backup and restore operations. You can also recover database datafiles, and restore individual datafiles and tablespaces to a specific point in time. These are advantages RMAN offers over storage snapshots. RMAN backups provide a last line of defense against full data loss when you can't use snapshots.
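
A minimal sketch of an RMAN backup and restore validation follows; it assumes RMAN runs on the database host with OS authentication and uses the control file rather than a recovery catalog.

```bash
# Example only: take a full (level 0) backup plus archived redo logs, then confirm the backup is restorable.
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
RESTORE DATABASE VALIDATE;
EOF
```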
+
+## Next steps
+
+Learn about options and recommendations to optimize protection and performance running Oracle on BareMetal Infrastructure:
+
+> [!div class="nextstepaction"]
+> [Options for Oracle BareMetal Infrastructure servers](options-considerations-high-availability.md)
baremetal-infrastructure Options Considerations High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/options-considerations-high-availability.md
+
+ Title: Options for Oracle BareMetal Infrastructure servers
+description: Learn about the options and considerations for Oracle BareMetal Infrastructure servers.
++ Last updated : 04/15/2021++
+# Options for Oracle BareMetal Infrastructure servers
+
+In this article, we'll consider options and recommendations to get the highest level of protection and performance running Oracle on BareMetal Infrastructure servers.
+
+## Data Guard protection modes
+
+Data Guard offers protection unavailable solely through Oracle Real Application Clusters (RAC), logical replication (such as GoldenGate), and storage-based replication.
+
+| Protection mode | Description |
+| | |
+| **Maximum Performance** | The default protection mode. It provides the highest level of protection without impacting the performance of the primary database. Data is considered committed as soon as it has been written to the primary database redo stream. It's then replicated to the standby database asynchronously. Generally, the standby database receives it within seconds, but no guarantee is given to that effect. This mode typically meets business requirements (alongside lag monitoring) without needing low latency network connectivity between the primary and standby sites.<br /><br />It provides the best operational persistence; however, it doesn't guarantee zero data loss. |
+| **Maximum Availability** | Provides the highest level of protection without impacting the primary database's availability. Data is never considered committed to the primary database until it has also been committed to at least one standby database. If the primary database can't write the redo changes to at least one standby database, it falls back to Maximum Performance mode rather than become unavailable. <br /><br />It allows the service to continue if the standby site is unavailable. If only one site is working, then only one copy of the data will be maintained until the second site is online and sync is re-established. |
+| **Maximum Protection** | Provides a similar protection level to Maximum Availability, with the added behavior that the primary database shuts down if it can't write the redo changes to at least one standby database. This ensures that data loss can't occur, but at the expense of more fragile availability. |
+
+>[!IMPORTANT]
+>If you need a recovery point objective (RPO) of zero, we recommend the Maximum Availability configuration. An RPO of zero can then be guaranteed even when multiple failures occur; for example, a network outage from the primary database followed by the loss of the primary database while the outage is still in effect.
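
As a minimal sketch, a broker-managed configuration can be raised to Maximum Availability as shown below; it assumes the Data Guard broker is enabled and that redo transport to at least one standby is set to SYNC.

```bash
# Example only: raise the protection mode to Maximum Availability in a broker-managed configuration.
# Run on the primary host; "/" connects with OS authentication as SYSDBA.
dgmgrl / <<'EOF'
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
SHOW CONFIGURATION;
EOF
```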
+
+### Data Guard deployment patterns
+
+Oracle lets you configure multiple destinations for redo generation, allowing for multiple standby databases. The most common configuration is shown in the following figure, a single standby database in a different region.
++
+Data Guard is configured in Maximum Performance mode for a default deployment. This configuration provides near real-time data replication via asynchronous redo transport. The standby database doesn't need to run inside of a RAC deployment, but we recommend that it be sized to meet the performance demands of the primary site.
+
+We recommend a deployment like that shown in the following figure for environments that require strict uptime or an RPO of zero. The Maximum Availability configuration consists of a local standby database applying redo in synchronous mode and a second standby database running in a remote region.
++
+You can create a local standby database when application performance would suffer from running the database and application servers in separate regions. In this configuration, a local standby database is used when planned or unplanned maintenance is needed on the primary cluster. You can run these databases with synchronous replication because they're in the same region, ensuring no data is lost between them.
+
+### Data Guard configuration considerations
+
+The Data Guard Broker should be implemented, as it simplifies implementing a Data Guard configuration and ensures that best practices are adhered to. It provides performance monitoring functionality and greatly simplifies the switchover, failover, and reinstantiation procedures.
+
+Data Guard allows you to run an observer process, which monitors all databases in a Data Guard configuration to determine database availability. If a primary database fails, the Data Guard Observer can automatically start a failover to a standby database in the configuration. You can implement the Data Guard Observer with multiple observers based on the number of physical sites (up to three).
+
+This observer should be located on the infrastructure that supports the application tier. The primary observer should always run on the physical site where the primary database is not located. We recommend caution in automating failover operations triggered by a Data Guard observer. First, be sure your applications are designed and tested to provide acceptable service when the database runs in a separate location.
+
+If the application is only able to operate locally, failover to the secondary site must be manual. Environments that require high availability levels (99.99% or 99.999% uptime) should use both a local and remote standby database, as shown in the preceding figure. In these cases, the parameter FastStartFailoverTarget will only be set to the local standby database.
+
+For all applications that support cross-site application/database access, FastStartFailoverTarget is set to all standby databases in the Data Guard configuration.
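
The following broker commands sketch the local-standby case described above; the database names ORADB_PRIM and ORADB_LOCALDR are placeholders, and fast-start failover additionally requires Flashback Database on the databases involved.

```bash
# Example only: target fast-start failover at the local standby only (database names are placeholders).
dgmgrl / <<'EOF'
EDIT DATABASE 'ORADB_PRIM' SET PROPERTY FastStartFailoverTarget='ORADB_LOCALDR';
ENABLE FAST_START FAILOVER;
EOF

# Start the observer from a host in the application tier (the process stays in the foreground until stopped).
dgmgrl / "START OBSERVER"
```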
+
+### Active Data Guard
+
+Oracle Active Data Guard (ADG) is a superset of basic Data Guard capabilities included with Oracle Database Enterprise Edition. It provides the following added features, which will be used across the Oracle Exadata deployment:
+
+- Unique corruption detection and automatic repair.
+- Rapid failover to the synchronized replica of production - manual or automatic.
+- Offload of production workloads to a synchronized standby that is open read-only.
+- Database rolling upgrades and standby-first patching using the physical standby.
+- Offload of incremental backups to the standby.
+- Zero data loss protection across any distance without impacting performance.
+
+Oracle's [Active Data Guard technical overview white paper](https://www.oracle.com/technetwork/database/availability/dg-adg-technical-overview-wp-5347548.pdf) provides a good overview of the preceding features, as shown in the following figure.
++
+## Backup recommendations
+
+Be sure to back up your databases. Use the restore and recover features to restore a database to the same or another system, or to recover database files.
+
+It is important to create a backup and recovery strategy to protect Oracle Database Appliance databases from data loss. Such loss could result from a physical problem with a disk that causes a failure of a read or write to a disk file required to run the database. User error can also cause data loss. The backup feature provides the ability to do a **point-in-time restore (PITR) of the database, System Change Number (SCN) recovery, and latest recovery**. You can create a backup policy in the Browser User Interface or from the command-line interface.
+
+The following backup options are available:
+
+- Back up to an NFS storage volume (Fast Recovery Area (FRA), /u98).
+- Using Azure NetApp Files SnapCenter snapshots.
+
+Process to consider:
+
+- Manual or automatic backups.
+- Automatic backups are written to NFS storage volumes (for example, /u98).
+- Backups run between 12:00 AM - 6:00 AM in the database system's time zone.
+- Preset retention periods: 7, 15, 30, 45, and 60 days.
+
+- Recover database from a backup stored in Object storage:
+ - To the last known good state with the least possible data loss.
+ - Using timestamp specified.
+ - Using the SCN specified.
+ - BackupReport - _uses SCN from backup report instead of specified SCN_.
++
+### Backup policy
+
+The backup policy defines the backup details. When you create a backup policy, you define the destination for database backups, the FRA (NFS location), and the recovery window.
+
+By default, the BASIC compression algorithm is used. When using LOW, MEDIUM, or HIGH compression algorithms for Disk or NFS backup policy, there are license considerations.
+
+### Backup levels
+
+Specify the backup level when you take a backup.
+
+- Level 0 - Full
+- Level 1 - Incremental
+- LongTerm/Archivelog - except for the backup retention policy, use a non-FRA location (for example, /u95); see the RMAN sketch after this list.
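
Under the assumption that backups are driven directly with RMAN, the levels above map to commands like the following; the /u95 path for long-term archive log backups is an example location.

```bash
# Example only: weekly level 0 (full), daily level 1 (incremental), and archive log backups to a non-FRA location.
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP INCREMENTAL LEVEL 1 DATABASE;
BACKUP ARCHIVELOG ALL FORMAT '/u95/%d_arch_%U';
EOF
```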
+
+## Next steps
+
+Learn how to recover your Oracle database when a failure does occur:
+
+> [!div class="nextstepaction"]
+> [Recover your Oracle database on Azure BareMetal Infrastructure](oracle-high-availability-recovery.md)
baremetal-infrastructure Oracle Baremetal Ethernet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-ethernet.md
The default configuration comes with one client IP interface (eth1), connecting
| **NIC logical interface** | **Name with RHEL OS** | **Use case** | | | | |
-| A | eth1.tenant | Client to BareMetal instance |
-| C | eth2.tenant | Node-to-storage; supports the coordination and access to the storage controllers for management of the storage environment. |
-| B | eth3.tenant | Node-to-node (Private interconnect) |
-| C | eth4.tenant | Reserved/ iSCSI |
-| C | eth5.tenant | Reserved/ Log Backup |
-| C | eth6.tenant | Node-to-storage_Data Backup (RMAN, Snapshot) |
-| C | eth7.tenant | Node-to-storage_dNFS-Pri; provides connectivity with the NetApp storage array. |
-| C | eth8.tenant | Node-to-storage_dNFS-Sec; provides connectivity with the NetApp storage array. |
-| D | eth9.tenant | DR connectivity for Global reach setup for accessing BMI in another region. |
-| A | \*eth10.tenant | \* Client to BareMetal instance
+| A | net1.tenant | Client to BareMetal instance |
+| C | net2.tenant | Node-to-storage; supports the coordination and access to the storage controllers for management of the storage environment. |
+| B | net3.tenant | Node-to-node (Private interconnect) |
+| C | net4.tenant | Reserved/ iSCSI |
+| C | net5.tenant | Reserved/ Log Backup |
+| C | net6.tenant | Node-to-storage_Data Backup (RMAN, Snapshot) |
+| C | net7.tenant | Node-to-storage_dNFS-Pri; provides connectivity with the NetApp storage array. |
+| C | net8.tenant | Node-to-storage_dNFS-Sec; provides connectivity with the NetApp storage array. |
+| D | net9.tenant | DR connectivity for Global reach setup for accessing BMI in another region. |
+| A | \*net10.tenant | \* Client to BareMetal instance
| If necessary, you can define more network interface controller (NIC) cards on your own. However, the configurations of existing NICs *can't* be changed.
baremetal-infrastructure Oracle Baremetal Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/oracle-baremetal-skus.md
Title: BareMetal SKUs for Oracle workloads
description: Learn about the SKUs for the Oracle BareMetal Infrastructure workloads. Previously updated : 04/14/2021 Last updated : 04/15/2021 # BareMetal SKUs for Oracle workloads
BareMetal Infrastructure for Oracle SKUs range from two sockets up to four socke
| **Oracle Certified** **hardware** | **Model** | **Total Memory** | **Storage** | **Availability** | | | | | | |
-| YES | SAP HANA on Azure S32m- 2 x Intel® Xeon® Processor I623416 CPU cores and 32 CPU threads | 1.5 TB | | Available |
-| YES | SAP HANA on Azure S64m- 4 x Intel® Xeon® Processor I623432 CPU cores and 64 CPU threads | 3.0 TB | | Available |
-| YES | SAP HANA on Azure S96– 2 x Intel® Xeon® Processor E7-8890 v448 CPU cores and 96 CPU threads | 768 GB | 3.0 TB | Available |
+| YES | SAP HANA on Azure S32m- 2 x Intel® Xeon® I6234 Processor 16 CPU cores and 32 CPU threads | 1.5 TB | | Available |
+| YES | SAP HANA on Azure S64m- 4 x Intel® Xeon® I6234 Processor 32 CPU cores and 64 CPU threads | 3.0 TB | | Available |
+| YES | SAP HANA on Azure S96– 2 x Intel® Xeon® E7-8890 v4 Processor 48 CPU cores and 96 CPU threads | 768 GB | 3.0 TB | Available |
| YES | SAP HANA on Azure S224 – 4 x Intel® Xeon® Platinum 8276 processor 112 CPU cores and 224 CPU threads | 3.0 TB | 6.3 TB | Available | | YES | SAP HANA on Azure S224m– 4 x Intel® Xeon® Platinum 8276 processor 112 CPU cores and 224 CPU threads | 6.0 TB | 10.5 TB | Available |
baremetal-infrastructure Oracle High Availability Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/workloads/oracle/oracle-high-availability-recovery.md
+
+ Title: Recover your Oracle database on Azure BareMetal Infrastructure
+description: Learn how you can recover your Oracle database on the Azure BareMetal Infrastructure.
++ Last updated : 04/15/2021++
+# Recover your Oracle database on Azure BareMetal Infrastructure
+
+While no single technology protects from all failure scenarios, combining features offers database administrators the ability to recover their database in nearly any situation.
+
+## Causes of database failure
+
+Database failures can occur for many reasons but typically fall under several categories:
+
+- Data manipulation errors.
+- Loss of online redo logs.
+- Loss of database control files.
+- Loss of database datafiles.
+- Physical data corruption.
+
+## Choose your method of recovery
+
+The type of recovery depends on the type of failure. Let's say an object is dropped or data is incorrectly modified. Then the quickest solution is usually to do a flashback database operation. In other cases, recovering through an Azure NetApp Files snapshot may provide the recovery you want. The following figure's decision tree represents common failure and recovery scenarios if all data protection options described above are implemented.
++
+Keep in mind this example decision tree is viewed only through the lens of a database administrator. Each deployment may have different requirements that could change the order of choices. For example, performing a database role switch to a different region via Data Guard may have an adverse effect on application performance, which could give the snapshot recovery method a lower RTO. To ensure RTO/RPO requirements are met, we recommend you test these operations and create documented procedures to execute them when needed.
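
For the Flashback Database branch of the decision tree, the recovery itself might look like the following sketch; the 30-minute rewind is an example and assumes Flashback Database is enabled with enough flashback logs retained.

```bash
# Example only: rewind the database 30 minutes and open it as a new incarnation.
rman target / <<'EOF'
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIME "SYSDATE-30/1440";
ALTER DATABASE OPEN RESETLOGS;
EOF
```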
+
+## Next steps
+
+Learn more about BareMetal Infrastructure:
+
+- [What is BareMetal Infrastructure on Azure?](../../concepts-baremetal-infrastructure-overview.md)
+- [Connect BareMetal Infrastructure instances in Azure](../../connect-baremetal-infrastructure.md)
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/disk-encryption.md Binary files differ
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/28/2021 Last updated : 4/15/2021 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## April 2021 Guest OS
+
+>[!NOTE]
+>The April Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-04 | [5001342] | Latest Cumulative Update(LCU) | 6.30 | Apr 13, 2021 |
+| Rel 21-04 | [4580325] | Flash update | 3.96, 4.89, 5.54, 6.30 | Oct 13, 2020 |
+| Rel 21-04 | [5000800] | IE Cumulative Updates | 2.109, 3.96, 4.89 | Mar 9, 2021 |
+| Rel 21-04 | [5001347] | Latest Cumulative Update(LCU) | 5.54 | Apr 13, 2021 |
+| Rel 21-04 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.109 | Oct 13, 2020 |
+| Rel 21-04 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.109 | Oct 13, 2020 |
+| Rel 21-04 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.89 | Oct 13, 2020 |
+| Rel 21-04 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.89 | Oct 13, 2020 |
+| Rel 21-04 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.96 | Oct 13, 2020 |
+| Rel 21-04 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup  | 3.96 | Oct 13, 2020 |
+| Rel 21-04 | [4601060] | .NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.30 | Feb 9, 2021 |
+| Rel 21-04 | [5001335] | Monthly Rollup  | 2.109 | Mar 9, 2021 |
+| Rel 21-04 | [5001387] | Monthly Rollup  | 3.96 | Apr 13, 2021 |
+| Rel 21-04 | [5001382] | Monthly Rollup  | 4.89 | Apr 13, 2021 |
+| Rel 21-04 | [5001401] | Servicing Stack update  | 3.96 | Apr 13, 2021 |
+| Rel 21-04 | [5001403] | Servicing Stack update  | 4.89 | Apr 13, 2021 |
+| Rel 21-04 OOB | [4578013] | Standalone Security Update  | 4.89 | Aug 19, 2020 |
+| Rel 21-04 | [5001402] | Servicing Stack update  | 5.54 | Apr 13, 2021 |
+| Rel 21-04 | [4592510] | Servicing Stack update  | 2.109 | Dec 8, 2020 |
+| Rel 21-04 | [5001404] | Servicing Stack update  | 6.30 | Apr 13, 2021 |
+| Rel 21-04 | [4494175] | Microcode  | 5.54 | Sep 1, 2020 |
+| Rel 21-04 | [4494174] | Microcode  | 6.30 | Sep 1, 2020 |
+
+[5001342]: https://support.microsoft.com/kb/5001342
+[4580325]: https://support.microsoft.com/kb/4580325
+[5000800]: https://support.microsoft.com/kb/5000800
+[5001347]: https://support.microsoft.com/kb/5001347
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[4601060]: https://support.microsoft.com/kb/4601060
+[5001335]: https://support.microsoft.com/kb/5001335
+[5001387]: https://support.microsoft.com/kb/5001387
+[5001382]: https://support.microsoft.com/kb/5001382
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
+[4578013]: https://support.microsoft.com/kb/4578013
+[5001402]: https://support.microsoft.com/kb/5001402
+[4592510]: https://support.microsoft.com/kb/4592510
+[5001404]: https://support.microsoft.com/kb/5001404
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++ ## March 2021 Guest OS
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
To optimize logs uploaded to a remote endpoint, such as Azure Blob Storage, we r
Log level configuration allows you to control the verbosity of the generated logs. Supported log levels are: `none`, `verbose`, `info`, `warning`, and `error`. The default log verbose level for both nodes and platform is `info`. Log levels can be modified globally by setting the `ARCHON_LOG_LEVEL` environment variable to one of the allowed values.
-It can also be set through the IoT Edge Module Twin document either globally, for all deployed skills, or for every specific skill by setting the values for `platformLogLevel` and `nodeLogLevel` as shown below.
+It can also be set through the IoT Edge Module Twin document either globally, for all deployed skills, or for every specific skill by setting the values for `platformLogLevel` and `nodesLogLevel` as shown below.
```json {
It can also be set through the IoT Edge Module Twin document either globally, fo
}, "graphs": { "samplegraph": {
- "nodeLogLevel": "verbose",
+ "nodesLogLevel": "verbose",
"platformLogLevel": "verbose" } }
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Swift](#tab/swift)
-For more information, see <a href="https://docs.microsoft.com/swift/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer" target="_blank"> `addBookmarkReachedEventHandler` </a>.
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 04/12/2021 Last updated : 04/16/2021 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Azure Cognitive Services containers provide the following set of Docker containe
|--|--|--|--| | [LUIS][lu-containers] | **LUIS** ([image](https://go.microsoft.com/fwlink/?linkid=2043204&clcid=0x409)) | Loads a trained or published Language Understanding model, also known as a LUIS app, into a docker container and provides access to the query predictions from the container's API endpoints. You can collect query logs from the container and upload these back to the [LUIS portal](https://www.luis.ai) to improve the app's prediction accuracy. | Generally available | | [Text Analytics][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Preview |
-| [Text Analytics][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Preview |
+| [Text Analytics][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and report a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available |
| [Text Analytics][ta-containers-sentiment] | **Sentiment Analysis v3** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available | | [Text Analytics][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Gated preview. [Request access][request-access]. |
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
The following sections in this article provides a list of services that are part
|[Custom Vision Service](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. | |[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. See [Face quickstart](./face/quickstarts/client-libraries.md) to get started with the service.| |[Form Recognizer](./form-recognizer/index.yml "Form Recognizer")|Form Recognizer identifies and extracts key-value pairs and table data from form documents; then outputs structured data including the relationships in the original file. See [Form Recognizer quickstart](./form-recognizer/quickstarts/client-library.md) to get started.|
-|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video. See [Video Indexer quickstart](/media-services/video-indexer/video-indexer-get-started.md) to get started.|
+|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video. See [Video Indexer quickstart](/azure/media-services/video-indexer/video-indexer-get-started) to get started.|
## Speech APIs
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
# What is Azure Communication Services?
-> [!IMPORTANT]
-> Applications that you build using Azure Communication Services can talk to Microsoft Teams. To learn more, visit our [Teams Interop](./quickstarts/voice-video-calling/get-started-teams-interop.md) documentation.
+Azure Communication Services allows you to easily add real-time voice, video, and telephone communication to your applications. Communication Services SDKs also allow you to add SMS functionality to your communications solutions. Azure Communication Services is identity agnostic and you have complete control over how end users are identified and authenticated. You can connect humans to the communication data plane or services (bots).
+Applications include:
-Azure Communication Services allows you to easily add real-time multimedia voice, video, and telephony-over-IP communications features to your applications. The Communication Services SDK libraries also allow you to add chat and SMS functionality to your communications solutions.
+- **Business to Consumer (B2C).** A business' employees and services can interact with consumers using voice, video, and rich text chat in a custom browser or mobile application. An organization can send and receive SMS messages, or operate an interactive voice response system (IVR) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) allows consumers to join Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.
+- **Consumer to Consumer.** Build engaging social spaces for consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, but complete application samples and UI assets are available to help you get started quickly.
-<br>
-
-> [!VIDEO https://www.youtube.com/embed/apBX7ASurgM]
-
-<br>
-<br>
-
-You can use Communication Services for voice, video, text, and data communication in a variety of scenarios:
--- Browser-to-browser, browser-to-app, and app-to-app communication-- Users interacting with bots or other services-- Users and bots interacting over the public switched telephony network-
-Mixed scenarios are supported. For example, a Communication Services application may have users speaking from browsers and traditional telephony devices at the same time. Communication Services may also be combined with Azure Bot Service to build bot-driven interactive voice response (IVR) systems.
+To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked below.
## Common scenarios
-The following resources are a great place to get started with Azure Communication Services.
<br> | Resource |Description | | | |
-|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|You can begin using Azure Communication Services by using the Azure portal or Communication Services SDK to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
-|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate outbound calls and build SMS communications solutions.|
+|**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|Begin using Azure Communication Services by using the Azure portal or Communication Services SDK to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
+|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate or receive phone calls and build SMS solutions.|
+|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS SDK is used to send and receive SMS messages from service applications.|
-After creating an Communication Services resource you can start building client scenarios, such as voice and video calling or text chat.
+After creating a Communication Services resource you can start building client scenarios, such as voice and video calling or text chat:
| Resource |Description | | | |
-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate your services against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services SDK.|
-|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your apps using the Calling SDK. This library is powered by WebRTC and allows you to establish peer-to-peer, multimedia, real-time communications within your applications.|
+|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services SDK.|
+|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your browser or native apps using the Calling SDK. |
|**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.|
-|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK can be used to integrate real-time chat into your applications.|
-|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS SDK allows you to send and receive SMS messages from your .NET and JavaScript applications.|
+|**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.|
## Samples
-The following samples demonstrate end-to-end utilization of the Azure Communication Services SDK libraries. Feel free to use these samples to bootstrap your own Communication Services solutions.
+The following samples demonstrate end-to-end usage of the Azure Communication Services. Use these samples to bootstrap your own Communication Services solutions.
<br> | Sample name | Description | | | |
-|**[The Group Calling Hero Sample](./samples/calling-hero-sample.md)**|See how the Communication Services SDK libraries can be used to build a group calling experience.|
-|**[The Group Chat Hero Sample](./samples/chat-hero-sample.md)**|See how the Communication Services SDK libraries can be used to build a group chat experience.|
+|**[The Group Calling Hero Sample](./samples/calling-hero-sample.md)**| Download a designed application sample for group calling for browsers, iOS, and Android devices. |
+|**[The Group Chat Hero Sample](./samples/chat-hero-sample.md)**| Download a designed application sample for group text chat for browsers. |
## Platforms and SDK libraries
-The following resources will help you learn about the Azure Communication Services SDK libraries:
+Learn more about the Azure Communication Services SDKs with the resources below. REST APIs are available for most functionality if you want to build your own clients or otherwise access the service over the Internet.
| Resource | Description | | | |
The following resources will help you learn about the Azure Communication Servic
## Other Microsoft Communication Services
-There are two other Microsoft communication products you may consider leveraging that are not directly interoperable with Communication Services at this time:
+There are two other Microsoft communication products you may consider using that are not directly interoperable with Communication Services at this time:
- [Microsoft Graph Cloud Communication APIs](/graph/cloud-communications-concept-overview) allow organizations to build communication experiences tied to Azure Active Directory users with Microsoft 365 licenses. This is ideal for applications tied to Azure Active Directory or where you want to extend productivity experiences in Microsoft Teams. There are also APIs to build applications and customization within the [Teams experience.](/microsoftteams/platform/?preserve-view=true&view=msteams-client-js-latest)
cosmos-db Migrate Relational To Cosmos Db Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-relational-to-cosmos-db-sql-api.md
We can also use Spark in [Azure Databricks](https://azure.microsoft.com/services
> For clarity and simplicity, the code snippets below include dummy database passwords explicitly inline, but you should always use Azure Databricks secrets. >
-First, we create and attach the required [SQL connector](https://docs.databricks.com/data/data-sources/sql-databases-azure.html) and [Azure Cosmos DB connector](https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
+First, we create and attach the required [SQL connector](/connectors/sql/) and [Azure Cosmos DB connector](https://docs.databricks.com/data/data-sources/azure/cosmosdb-connector.html) libraries to our Azure Databricks cluster. Restart the cluster to make sure libraries are loaded.
:::image type="content" source="./media/migrate-relational-to-cosmos-sql-api/databricks1.png" alt-text="Screenshot that shows where to create and attach the required SQL connector and Azure Cosmos DB connector libraries to our Azure Databricks cluster.":::
Next, we present two samples, for Scala and Python.
Here, we get the results of the SQL query with "FOR JSON" output into a DataFrame: ```scala
-// Connect to Azure SQL https://docs.databricks.com/data/data-sources/sql-databases-azure.html
+// Connect to Azure SQL /connectors/sql/
import com.microsoft.azure.sqldb.spark.config.Config import com.microsoft.azure.sqldb.spark.connect._ val configSql = Config(Map(
In either approach, at the end, we should get properly saved embedded OrderDetai
## Next steps * Learn about [data modeling in Azure Cosmos DB](./modeling-data.md)
-* Learn [how to model and partition data on Azure Cosmos DB](./how-to-model-partition-example.md)
+* Learn [how to model and partition data on Azure Cosmos DB](./how-to-model-partition-example.md)
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-time-to-live.md
globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: 20.5}) //TTL val
globaldb:PRIMARY> db.coll.insert({id:1, location: "Paris", ttl: NumberLong(2147483649)}) //TTL value is greater than Int32.MaxValue (2,147,483,648). ```
-## How to activate the per-document TTL feature
-
-The per-document TTL feature can be activated with Azure Cosmos DB's API for MongoDB.
-- ## Next steps * [Expire data in Azure Cosmos DB automatically with time to live](../cosmos-db/time-to-live.md) * [Indexing your Cosmos database configured with Azure Cosmos DB's API for MongoDB](../cosmos-db/mongodb-indexing.md)
cost-management-billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/overview.md
Title: Overview of Cloudyn in Azure
description: Cloudyn is a multi-cloud cost management solution that helps you use Azure and other cloud resources. Previously updated : 10/23/2020 Last updated : 04/15/2021
# What is the Cloudyn service?
-Cloudyn, a Microsoft subsidiary, allows you to track cloud usage and expenditures for your Azure resources. Easy-to-understand dashboard reports help with cost allocation and showbacks/chargebacks as well. Cloudyn helps optimize your cloud spending by identifying underutilized resources that you can then manage and adjust.
-
-To watch an introductory video, see [Introduction to Azure Cloudyn](https://azure.microsoft.com/resources/videos/azure-cost-management-overview-and-demo/).
-
-Azure Cost Management offers similar functionality to Cloudyn. Azure Cost Management is a native Azure cost management solution. It helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money. For more information, see [Azure Cost Management](../cost-management-billing-overview.md).
-
[!INCLUDE [cloudyn-note](../../../includes/cloudyn-note.md)]
-## Monitor usage and spending
-
-Monitoring your usage and spending is critically important for cloud infrastructures because organizations pay for the resources they consume over time. When usage exceeds agreement thresholds, unexpected cost overages can quickly occur. A few important factors can make ad hoc monitoring difficult. First, projecting costs based on average usage assumes that your consumption remains consistent over a given billing period. Second, when costs are near or exceed your budget, it's important you get notifications proactively to adjust your spending. And, cloud service providers might not offer cost projection vs. thresholds or period to period comparison reports.
-
-Reports help you monitor spending to analyze and track cloud usage, costs, and trends. Using Over Time reports, you can detect anomalies that differ from normal trends. Inefficiencies in your cloud deployment are visible in optimization reports. You can also notice inefficiencies in cost analysis reports.
-
-## Manage costs
-
-Historical data can help manage costs when you analyze usage and costs over time to identify trends. Trends are then used to forecast future spending. Cloudyn also includes useful projected cost reports.
-
-Cost allocation manages costs by analyzing your costs based on your tagging policy. You can use tags on your custom accounts, resources, and entities to refine cost allocation. Category Manager organizes your tags to help provide additional governance. And, you use cost allocation for showback/chargeback to show resource utilization and associated costs to influence consumption behaviors or charge tenant customers.
-
-Access control helps manage costs by ensuring that users and teams access only the cost management data that they needed. You use entity structure, user management, and scheduled reports with recipient lists to assign access.
-
-Alerting helps manage costs by notifying you automatically when unusual spending or overspending occurs. Alerts can also notify other stakeholders automatically for spending anomalies and overspending risks. Various reports support alerts based on budget and cost thresholds. However, alerts are not currently supported for CSP partner accounts or subscriptions.
-
-## Improve efficiency
-
-You can determine optimal VM usage and identify idle VMs or remove idle VMs and unattached disks with Cloudyn. Using information in Sizing Optimization and Inefficiency reports, you can create a plan to down-size or remove idle VMs. However, optimization reports are not currently supported for CSP partner accounts or subscriptions.
-
-If you provisioned AWS Reserved Instances, you can improve your reserved instances utilization with Optimization reports where you can view buying recommendations, modify unused reservations, and plan provisioning.
-- ## Next steps - [Review usage and costs](tutorial-review-usage.md)
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Previously updated : 03/24/2021 Last updated : 04/15/2021
To transfer any other Azure subscriptions to a CSP partner, the subscriber needs
## Transfer CSP subscription to other offer
-To transfer any other subscriptions from a CSP Partner to any other Azure offer, the subscriber needs to move resources between source CSP subscriptions and target subscriptions.
+To transfer any other subscriptions from a CSP Partner to any other Azure offer, the subscriber needs to move resources between source CSP subscriptions and target subscriptions. This is work done by a partner and a customer - it is not work done by a Microsoft representative.
-1. Create target Azure subscriptions.
+1. The customer creates target Azure subscriptions.
1. Ensure that the source and target subscriptions are in the same Azure Active Directory (Azure AD) tenant. For more information about changing an Azure AD tenant, see [Associate or add an Azure subscription to your Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md). Note that the change directory option isn't supported for the CSP subscription. For example, you're transferring from a CSP to a pay-as-you-go subscription. You need change the directory of the pay-as-you-go subscription to match the directory.
To transfer any other subscriptions from a CSP Partner to any other Azure offer,
> - When you associate a subscription to a different directory, users that have roles assigned using [Azure RBAC](../../role-based-access-control/role-assignments-portal.md) lose their access. Classic subscription administrators, including Service Administrator and Co-Administrators, also lose access. > - Policy Assignments are also removed from a subscription when the subscription is associated with a different directory.
-1. The user account that you use to do the transfer must have [Azure RBAC](add-change-subscription-administrator.md) owner access on both subscriptions.
+1. The customer user account that you use to do the transfer must have [Azure RBAC](add-change-subscription-administrator.md) owner access on both subscriptions.
1. Before you begin, [validate](/rest/api/resources/resources/validatemoveresources) that all Azure resources can move from the source subscription to the destination subscription. > [!IMPORTANT] > - Some Azure resources can't move between subscriptions. To view the complete list of Azure resource that can move, see [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
cost-management-billing Reservation Utilization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/reservation-utilization.md
+
+ Title: View Azure reservation utilization
+description: Learn how to get reservation utilization and details.
+++++ Last updated : 04/15/2021+++
+# View reservation utilization after purchase
+
+You can view reservation utilization percentage and the resources that used the reservation in the Azure portal and in the Cost Management Power BI app.
+
+## View utilization in the Azure portal with Azure RBAC access
+
+To view reservation utilization, you must have Azure RBAC access to the reservation or you must have elevated access to manage all Azure subscriptions and management groups.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade).
+1. The list shows all the reservations where you have the Owner or Reader role. Each reservation shows the last known utilization percentage.
+1. Select the utilization percentage to see the utilization history and details. The following video shows an example.
+ > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4sYwk]
+
+## View utilization as billing administrator
+
+An Enterprise Agreement (EA) administrator or a Microsoft Customer Agreement (MCA) billing administrator can view the utilization from **Cost Management + Billing**.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to **Cost Management + Billing** > **Reservations**.
+1. Select the utilization percentage to see the utilization history and details.
+
+## Get reservations and utilization using APIs, PowerShell, and CLI
+
+You can get the [reservation utilization](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) using the Reserved Instance usage API.
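
As a hedged example, daily utilization can also be pulled with the Azure CLI consumption commands; the reservation order ID and date range below are placeholders.

```bash
# Example only: list daily utilization for a reservation order (ID and dates are placeholders).
az consumption reservation summary list \
  --grain daily \
  --reservation-order-id 00000000-0000-0000-0000-000000000000 \
  --start-date 2021-04-01 \
  --end-date 2021-04-15 \
  --output table
```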
+
+## See reservations and utilization in Power BI
+
+There are two options for Power BI users:
+
+- Azure Cost Management connector for Power BI Desktop - Reservation purchase date and utilization data are available in the [Azure Cost Management connector for Power BI Desktop](/power-bi/desktop-connect-azure-cost-management). Create the reports you want by using the connector.
+- Azure Cost Management Power BI App - Use the [Azure Cost Management Power BI App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp) for pre-created reports that you can further customize.
+
+## Next steps
+
+- [Manage Azure Reservations](manage-reserved-vm-instance.md).
+- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md).
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md).
+- [Understand reservation usage for CSP subscriptions](/partner-center/azure-reservations).
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-debug-mode.md
Previously updated : 04/14/2021 Last updated : 04/16/2021 # Mapping data flow Debug Mode
Last updated 04/14/2021
Azure Data Factory mapping data flow's debug mode allows you to interactively watch the data shape transform while you build and debug your data flows. The debug session can be used both in Data Flow design sessions as well as during pipeline debug execution of data flows. To turn on debug mode, use the **Data Flow Debug** button in the top bar of data flow canvas or pipeline canvas when you have data flow activities.
-![Debug slider 1](media/data-flow/debugbutton.png "Debug slider")
+![Screenshot that shows where is the Debug slider 1](media/data-flow/debug-button.png)
-![Debug slider 2](media/data-flow/debug-button-4.png "Debug slider")
+![Screenshot that shows where is the Debug slider 2](media/data-flow/debug-button-4.png)
Once you turn on the slider, you will be prompted to select which integration runtime configuration you wish to use. If AutoResolveIntegrationRuntime is chosen, a cluster with eight cores of general compute with a default 60-minute time to live will be spun up. If you'd like to allow for more idle team before your session times out, you can choose a higher TTL setting. For more information on data flow integration runtimes, see [Data flow performance](concepts-data-flow-performance.md#ir).
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 04/14/2021 Last updated : 04/16/2021 # Data Flow activity in Azure Data Factory
You can parameterize the core count or compute type if you use the auto-resolve
To execute a debug pipeline run with a Data Flow activity, you must switch on data flow debug mode via the **Data Flow Debug** slider on the top bar. Debug mode lets you run the data flow against an active Spark cluster. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
-![Debug button](media/data-flow/debug-button-3.png "Debug button")
+![Screenshot that shows where is the Debug button](media/data-flow/debug-button-3.png)
The debug pipeline runs against the active debug cluster, not the integration runtime environment specified in the Data Flow activity settings. You can choose the debug compute environment when starting up debug mode.
data-factory Data Flow New Branch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-new-branch.md
Previously updated : 01/08/2020 Last updated : 04/16/2021 # Creating a new branch in mapping data flow
A new branch can be added from the transformation list similar to other transfor
In the below example, the data flow is reading taxi trip data. Output aggregated by both day and vendor is required. Instead of creating two separate data flows that read from the same source, a new branch can be added. This way both aggregations can be executed as part of the same data flow. ![Screenshot shows the data flow with two branches from the source.](media/data-flow/new-branch.png "Adding a new branch")+
+> [!NOTE]
+> When clicking the plus (+) to add transformations to your graph, you will only see the New Branch option when there are subsequent transformation blocks. This is because New Branch creates a reference to the existing stream and requires further upstream processing to operate on. If you do not see the New Branch option, add a Derived Column or other transformation first, then return to the previous block and you will see New Branch as an option.
+
+## Next steps
+
+After branching, you may want to use the [data flow transformations](data-flow-transformation-overview.md)
data-factory Data Flow Sort https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sort.md
Last updated 04/14/2020
The sort transformation allows you to sort the incoming rows on the current data stream. You can choose individual columns and sort them in ascending or descending order. > [!NOTE]
-> Mapping data flows are executed on spark clusters which distribute data across multiple nodes and partitions. If you choose to repartition your data in a subsequent transformation, you may lose your sorting due to reshuffling of data.
+> Mapping data flows are executed on Spark clusters, which distribute data across multiple nodes and partitions. If you choose to repartition your data in a subsequent transformation, you may lose your sorting due to reshuffling of data. The best way to maintain sort order in your data flow is to set single partition in the Optimize tab on the transformation and keep the Sort transformation as close to the Sink as possible.
## Configuration
data-factory Lab Data Flow Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/lab-data-flow-data-share.md
Previously updated : 04/14/2021 Last updated : 04/16/2021 # Data integration using Azure Data Factory and Azure Data Share
To turn on debug, click the **Data flow debug** slider in the top bar of data fl
![Portal configure 10](media/lab-data-flow-data-share/configure10.png)
-![Portal configure 11](media/lab-data-flow-data-share/configure11.png)
+![Screenshot that shows where the Data flow debug slider is.](media/lab-data-flow-data-share/configure-11.png)
## Ingest data using the copy activity
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-delta-lake.md
Previously updated : 04/14/2021 Last updated : 04/16/2021 # Transform data in delta lake using mapping data flows
In this step, you'll create a pipeline that contains a data flow activity.
![Screenshot that shows where you name your data flow when you create a new data flow.](media/tutorial-data-flow/activity2.png) 1. In the top bar of the pipeline canvas, slide the **Data Flow debug** slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data Flow clusters take 5-7 minutes to warm up and users are recommended to turn on debug first if they plan to do Data Flow development. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
- ![Data Flow Activity](media/tutorial-data-flow/dataflow1.png)
+ ![Screenshot that shows where the Data flow debug slider is.](media/tutorial-data-flow/dataflow1.png)
## Build transformation logic in the data flow canvas
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow.md
Previously updated : 04/14/2021 Last updated : 04/16/2021 # Transform data using mapping data flows
Once you create your Data Flow, you'll be automatically sent to the data flow ca
![Screenshot that shows where you select New after you name your source.](media/tutorial-data-flow/dataflow3.png) 1. Choose **Azure Data Lake Storage Gen2**. Click Continue.
- ![Screenshot that shows the Azure Data Lake Storage Gen2 tile.](media/tutorial-data-flow/dataset1.png)
+ ![Screenshot that shows where the Azure Data Lake Storage Gen2 tile is.](media/tutorial-data-flow/dataset1.png)
1. Choose **DelimitedText**. Click Continue. ![Screenshot that shows the DelimitedText tile.](media/tutorial-data-flow/dataset2.png)
databox-online Azure Stack Edge Gpu Deploy Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
Previously updated : 03/08/2021 Last updated : 04/15/2021 # Deploy Azure Data Services on your Azure Stack Edge Pro GPU device
This article describes the process of creating an Azure Arc Data Controller and then deploying Azure Data Services on your Azure Stack Edge Pro GPU device.
-Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro device, you can deploy Azure Data Services such as SQL Managed Instance (Preview) on that data controller.
+Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro GPU device, you can deploy Azure Data Services such as SQL Managed Instance (Preview) on that data controller.
The procedure to create Data Controller and then deploy an SQL Managed Instance involves the use of PowerShell and `kubectl` - a native tool that provides command-line access to the Kubernetes cluster on the device.
The procedure to create Data Controller and then deploy an SQL Managed Instance
Before you begin, make sure that:
-1. You've access to an Azure Stack Edge Pro device and you've activated your device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
+1. You've access to an Azure Stack Edge Pro GPU device and you've activated your device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
-1. You've enabled the compute role on the device. A Kubernetes cluster was also created on the device when you configured compute on the device as per the instructions in [Configure compute on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-configure-compute.md).
+1. You've enabled the compute role on the device. A Kubernetes cluster was also created on the device when you configured compute on the device as per the instructions in [Configure compute on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-configure-compute.md).
1. You have the Kubernetes API endpoint from the **Device** page of your local web UI. For more information, see the instructions in [Get Kubernetes API endpoint](azure-stack-edge-gpu-deploy-configure-compute.md#get-kubernetes-endpoints).
The data controller is a collection of pods that are deployed to your Kubernetes
The deployment may take approximately 5 minutes to complete. > [!NOTE]
- > The data controller created on Kubernetes cluster on your Azure Stack Edge Pro device works only in the disconnected mode in the current release.
+ > The data controller created on Kubernetes cluster on your Azure Stack Edge Pro GPU device works only in the disconnected mode in the current release. The disconnected mode is for the Data Controller and not for your device.
### Monitor data creation status
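While the deployment is in progress, one quick way to watch the data controller pods come up is with `kubectl`. This is a hedged sketch rather than the exact commands from the procedure; substitute the namespace you chose when creating the data controller.

```powershell
# Replace <namespace> with the namespace used for the data controller.
# List the data controller pods and repeat until they all show Running / Ready.
kubectl get pods -n <namespace>

# If a pod is stuck in Pending or CrashLoopBackOff, inspect its events.
kubectl describe pod <pod-name> -n <namespace>
```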
kubectl delete ns <Name of your namespace>
## Next steps -- [Deploy a stateless application on your Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-stateless-application-kubernetes.md).
+- [Deploy a stateless application on your Azure Stack Edge Pro](./azure-stack-edge-gpu-deploy-stateless-application-kubernetes.md).
databox-online Azure Stack Edge Gpu Deploy Vm Specialized Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md
Previously updated : 03/30/2021 Last updated : 04/15/2021
-#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro GPU device so that I can deploy VMs on the device.
-# Deploy a VM from a specialized image on your Azure Stack Edge Pro device via Azure PowerShell
+# Deploy a VM from a specialized image on your Azure Stack Edge Pro GPU device via Azure PowerShell
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-This article describes the steps required to deploy a virtual machine (VM) on your Azure Stack Edge Pro device from a specialized image.
+This article describes the steps required to deploy a virtual machine (VM) on your Azure Stack Edge Pro GPU device from a specialized image.
-## About specialized images
+To prepare a generalized image for deploying VMs in Azure Stack Edge Pro GPU, see [Prepare generalized image from Windows VHD](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md) or [Prepare generalized image from an ISO](azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md).
-A Windows VHD or VHDX can be used to create a *specialized* image or a *generalized* image. The following table summarizes key differences between the *specialized* and the *generalized* images.
--
-|Image type |Generalized |Specialized |
-||||
-|Target |Deployed on any system | Targeted to a specific system |
-|Setup after boot | Setup required at first boot of the VM. | Setup not needed. <br> Platform turns on the VM. |
-|Configuration |Hostname, admin-user, and other VM-specific settings required. |Pre-configured. |
-|Used to |Create multiple new VMs from the same image. |Migrate a specific machine or restoring a VM from previous backup. |
+## About VM images
+A Windows VHD or VHDX can be used to create a *specialized* image or a *generalized* image. The following table summarizes key differences between the *specialized* and the *generalized* images.
-This article covers steps required to deploy from a specialized image. To deploy from a generalized image, see [Use generalized Windows VHD](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md) for your device.
-
-## VM image workflow
+## Workflow
The high-level workflow to deploy a VM from a specialized image is:
The high-level workflow to deploy a VM from a specialized image is:
1. Create a new managed disk from the VHD. 1. Create a new virtual machine from the managed disk and attach the managed disk. - ## Prerequisites Before you can deploy a VM on your device via PowerShell, make sure that:
Verify that your client can connect to the local Azure Resource Manager.
``` 2. Provide the username `EdgeArmUser` and the password to connect via Azure Resource Manager. If you do not recall the password, [Reset the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
-
## Deploy VM from specialized image
The following sections contain step-by-step instructions to deploy a VM from a s
Follow these steps to copy VHD to local storage account:
-1. Copy the source VHD to a local blob storage account on your Azure Stack Edge.
+1. Copy the source VHD to a local blob storage account on your Azure Stack Edge.
1. Take note of the resulting URI. You'll use this URI in a later step.
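If you'd rather script the copy than use a graphical tool, AzCopy can upload the VHD to the device's local blob storage as a page blob. This is a sketch under assumptions: the account, container, device DNS name, and SAS token are placeholders, and the device's blob service may require pinning AzCopy to an older API version.

```powershell
# Placeholder values; substitute your local storage account, container, device DNS name, and a SAS with write access.
# Pin the service API version if the device rejects the default used by newer AzCopy builds.
$env:AZCOPY_DEFAULT_SERVICE_API_VERSION = "2017-11-09"
azcopy copy "C:\vhds\myvm.vhd" "https://<local-account>.blob.<device-name>.<dns-domain>/vhds/myvm.vhd?<SAS>" --blob-type PageBlob
```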
-
+ To create and access a local storage account, see the sections [Create a storage account](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#create-a-storage-account) through [Upload a VHD](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#upload-a-vhd) in the article: [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md). ## Create a managed disk from VHD
This article used only one resource group to create all the VM resource. Deletin
## Next steps
-Depending on the nature of deployment, you can choose one of the following procedures.
--- [Deploy a VM from a generalized image via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md) -- [Deploy a VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
+- [Prepare a generalized image from a Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md)
+- [Prepare a generalized image from an ISO to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md)
databox-online Azure Stack Edge Gpu Prepare Windows Generalized Image Iso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md
+
+ Title: Prepare generalized image from ISO to deploy VMs on Azure Stack Edge Pro GPU
+description: Describes how to create a generalized Windows VM image starting from an ISO. Use this generalized image to deploy virtual machines on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 04/15/2021+
+#Customer intent: As an IT admin, I need to be able to quickly deploy new Windows virtual machines on my Azure Stack Edge Pro GPU device, and I want to use an ISO image for OS installation.
++
+# Prepare generalized image from ISO to deploy VMs on Azure Stack Edge Pro GPU
++
+To deploy VMs on your Azure Stack Edge Pro GPU device, you need to be able to create custom virtual machine (VM) images that you can use to create VMs. This article describes how to prepare a Windows VM image using ISO installation media, and then generalize that image so you can use it to deploy multiple new VMs on your Azure Stack Edge Pro GPU device.
+
+To prepare a generalized image created from a Windows VHD or VHDX, see [Prepare a generalized image from a Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md).
+
+## About VM images
+
+A Windows VHD or VHDX can be used to create a *specialized* image or a *generalized* image. The following table summarizes key differences between the *specialized* and the *generalized* images.
++
+## Workflow
+
+The high-level workflow to create a generalized Windows VHD using an ISO is:
+
+1. Prepare the source VM using an ISO image:
+ 1. Create a new, blank, fixed-size VHD in Hyper-V Manager.
+ 1. Use that VHD to create a new virtual machine.
+ 1. Mount your ISO image on the DVD drive of the new VM.
+1. Start the VM, and install the Windows operating system.
+1. Generalize the VHD using the *sysprep* utility.
+1. Copy the generalized image to Azure Blob storage.
+
+## Prerequisites
+
+Before you can create a generalized Windows VHD by using an ISO image, make sure that:
+
+- You have an ISO image for the supported Windows version that you want to turn into a generalized VHD. Windows ISO images can be downloaded from the [Microsoft Evaluation Center](https://www.microsoft.com/en-us/evalcenter/).
+
+- You have access to a Windows client with Hyper-V Manager installed.
+
+- You have access to an Azure blob storage account to store your VHD after it is prepared.
+
+## Prepare source VM using an ISO
+
+When you use an ISO image to install the operating system on your VM image, you start by creating a blank, fixed-size VHD in Hyper-V Manager. You then use that VHD to create a virtual machine. Then you attach the ISO image to the VM.
+
+#### Create new VHD in Hyper-V Manager
+
+Your first step is to create a new Generation 1 VHD in Hyper-V Manager, which will be the source VHD for a new virtual machine.
+
+To create the VHD, follow these steps:
+
+1. Open Hyper-V Manager on your client system. On the **Action** menu, select **New** and then **Hard Disk**.
+
+ ![Select New and then Hard Disk](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-01.png)
+
+1. Under **Choose Disk Format**, select **VHD**. Then select **Next >**.
+
+ ![Choose VHD as the disk format](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-02.png)
+
+2. Under **Choose Disk Type**, select **Fixed size**. Then select **Next >**.
+
+ ![Choose Fixed size as the disk type](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-03.png)
+
+3. Under **Specify Name and Location**, enter a name and location for your new VHD. Then select **Next >**.
+
+ ![Enter the name and location for the VHD](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-04.png)
+
+4. Under **Configure Disk**, select **Create a new blank virtual hard disk**, and enter the size of disk you would like to create (generally 20 GB and above for Windows Server). Then select **Next >**.
+
+ ![Settings for creating a new blank virtual hard disk and specifying the size](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-05.png)
+
+5. Under **Summary**, review your selections, and select **Finish** to create the new VHD. The process will take five or more minutes depending on the size of the VHD created.
+
+ ![Summary of VHD settings](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-06.png)
+
+#### Create Hyper-V VM from VHD
+
+Now you'll use the VHD you just created to create a new virtual machine.
+
+To create your new virtual machine, follow these steps:
+
+1. Open Hyper-V Manager on your Windows client.
+
+2. On the **Actions** pane, select **New** and then **Virtual Machine**.
+
+ ![Select New and then Virtual Machine from the menu on the right.](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-07.png)
+
+3. In the New Virtual Machine Wizard, specify the name and location of your VM.
+
+ ![New Virtual Machine wizard, Specify Name and Location](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-08.png)
+
+4. Under **Specify Generation**, select **Generation 1**. Then select **Next >**.
+
+ ![New Virtual Machine wizard, Choose the generation of virtual machine to create](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-09.png)
+
+5. Under **Assign Memory**, assign the desired memory to the virtual machine. Then select **Next >**.
+
+ ![New Virtual Machine wizard, Assign Memory](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-10.png)
+
+6. Under **Configure Networking**, enter your network configuration. Then select **Next >**.
+
+ ![New Virtual Machine wizard, Configure Networking](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-11.png)
+
+7. Under **Connect Virtual Hard Disk**, select **Use an existing virtual hard disk** and browse to the fixed VHD you created in the previous procedure. Then select **Next >**.
+
+ ![New Virtual Machine wizard, Select an existing virtual hard disk as the source](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-12.png)
+
+8. Review the summary, and select **Finish** to create the virtual machine.
+
+#### Mount ISO image on DVD drive of VM
+
+After creating the new virtual machine, follow these steps to mount your ISO image on the DVD drive of the virtual machine:
+
+1. In Hyper-V Manager, select the VM you just created, and then select **Settings**.
+
+ ![In Hyper-V Manager, open Settings for your virtual machine](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-13.png)
+
+2. Under **BIOS**, ensure that **CD** is at the top of the **Startup order** list.
+
+ ![In BIOS settings, the first item under Startup order should be CD](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-14.png)
+
+3. Under **DVD Drive**, select **Image file**, and browse to your ISO image.
+
+ ![In DVD drive settings, select the image file for your VHD](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-15.png)
+
+4. Select **OK** to save your VM settings.
+
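If you prefer scripting over the wizard, the same prepare-VM steps can be expressed with the Hyper-V PowerShell module. This is a sketch, not the article's procedure: the paths, names, sizes, and switch name are examples, and it assumes a client with the Hyper-V module installed.

```powershell
# Example values; adjust paths, sizes, and the virtual switch for your environment.
$vhdPath = "C:\VHD\win-server-source.vhd"
$isoPath = "C:\ISO\windows-server.iso"
$vmName  = "GeneralizedImageSourceVM"

# 1. Create a blank, fixed-size VHD.
New-VHD -Path $vhdPath -SizeBytes 40GB -Fixed

# 2. Create a Generation 1 VM that uses the VHD.
New-VM -Name $vmName -Generation 1 -MemoryStartupBytes 4GB -VHDPath $vhdPath -SwitchName "Default Switch"

# 3. Mount the ISO on the VM's DVD drive and boot from CD first.
Set-VMDvdDrive -VMName $vmName -Path $isoPath
Set-VMBios -VMName $vmName -StartupOrder @("CD", "IDE", "LegacyNetworkAdapter", "Floppy")

# 4. Start the VM and continue with the operating system installation.
Start-VM -Name $vmName
```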
+## Start VM, and complete OS installation
+
+To finish building your virtual machine, you need to start the virtual machine and walk through the operating system installation.
++
+## Generalize the VHD
++
+Your VHD can now be used to create a generalized image to use on Azure Stack Edge Pro GPU.
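The generalization step collapsed into the include above typically comes down to running the *sysprep* utility inside the VM. As a sketch:

```powershell
# Run inside the VM from an elevated prompt. The VM shuts down when sysprep finishes;
# do not restart it afterwards, or the image is no longer generalized.
C:\Windows\System32\Sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm
```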
+
+## Upload generalized VHD to Azure Blob storage
++
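One scripted way to do the upload is with Az PowerShell. The sketch below is not the procedure from the include above; the storage account, key, container, and file names are placeholders.

```powershell
# Placeholder values; use your own storage account, key, container, and VHD path.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
New-AzStorageContainer -Name "vhds" -Context $ctx -ErrorAction SilentlyContinue

# VM images must be uploaded as page blobs.
Set-AzStorageBlobContent -File "C:\VHD\win-server-generalized.vhd" -Container "vhds" `
    -Blob "win-server-generalized.vhd" -BlobType Page -Context $ctx
```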
+## Next steps
+
+- [Deploy a VM from a generalized image via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
+- [Prepare a generalized image from a Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md)
+- [Prepare a specialized image and deploy VMs using the image](azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md)
databox-online Azure Stack Edge Gpu Prepare Windows Vhd Generalized Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md
Title: Create VM images from generalized image of Windows VHD for your Azure Stack Edge Pro GPU device
-description: Describes how to VM images from generalized images starting from a Windows VHD or a VHDX. Use this generalized image to create VM images to use with VMs deployed on your Azure Stack Edge Pro GPU device.
+ Title: Prepare generalized image from Windows VHD to deploy VMs on Azure Stack Edge Pro GPU
+description: Describes how to create a generalized VM image starting from a Windows VHD or VHDX. Use this generalized VM image to deploy virtual machines on your Azure Stack Edge Pro GPU device.
Previously updated : 03/18/2021 Last updated : 04/15/2021
-#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
-# Use generalized image from Windows VHD to create a VM image for your Azure Stack Edge Pro device
+# Prepare generalized image from Windows VHD to deploy VMs on Azure Stack Edge Pro GPU
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-To deploy VMs on your Azure Stack Edge Pro device, you need to be able to create custom VM images that you can use to create VMs. This article describes the steps required to prepare a Windows VHD or VHDX to create a generalized image. This generalized image is then used to create a VM image for your Azure Stack Edge Pro device.
+To deploy VMs on your Azure Stack Edge Pro GPU device, you need to be able to create custom VM images that you can use to create VMs. This article describes how to prepare a generalized image from a Windows VHD or VHDX, which you can use to deploy virtual machines on Azure Stack Edge Pro GPU devices.
-## About preparing Windows VHD
+To prepare a generalized VM image using an ISO, see [Prepare a generalized image from an ISO to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md).
-A Windows VHD or VHDX can be used to create a *generalized* image or a *specialized* image. The following table summarizes key differences between the *generalized* and the *specialized* images.
+## About VM images
+A Windows VHD or VHDX can be used to create a *specialized* image or a *generalized* image. The following table summarizes key differences between the *specialized* and the *generalized* images.
-|Image type |Generalized |Specialized |
-||||
-|Target |Deployed on any system | Targeted to a specific system |
-|Setup after boot | Setup required at first boot of the VM. | Setup not needed. <br> Platform turns the VM on. |
-|Configuration |Hostname, admin-user, and other VM-specific settings required. |Completely pre-configured. |
-|Used when |Creating multiple new VMs from the same image. |Migrating a specific machine or restoring a VM from previous backup. |
+## Workflow
-This article covers steps required to deploy from a generalized image. To deploy from a specialized image, see [Use specialized Windows VHD](azure-stack-edge-placeholder.md) for your device.
+The high-level workflow to prepare a Windows VHD to use as a generalized image, starting from the VHD or VHDX of an existing virtual machine, has the following steps:
-> [!IMPORTANT]
-> This procedure does not cover cases where the source VHD is configured with custom configurations and settings. For example, additional actions may be required to generalize a VHD containing custom firewall rules or proxy settings. For more information on these additional actions, see [Prepare a Windows VHD to upload to Azure - Azure Virtual Machines](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
--
-## VM image workflow
-
-The high-level workflow to prepare a Windows VHD for use as a generalized image has the following steps:
-
-1. Convert the source VHD or VHDX to a fixed size VHD.
-1. Create a VM in Hyper-V using the fixed VHD.
-1. Connect to the Hyper-V VM.
-1. Generalize the VHD using the *sysprep* utility.
+1. Prepare the source VM from a Windows VHD:
+ 1. Convert the source VHD or VHDX to a fixed-size VHD.
+ 1. Use that VHD to create a new virtual machine.<!--Can this procedure be generalized and moved to an include file?-->
+1. Start the VM, and install the Windows operating system.
+1. Generalize the VHD using the *sysprep* utility.
1. Copy the generalized image to Blob storage.
-1. Use generalized image to deploy VMs on your device. For more information, see how to [deploy a VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md) or [deploy a VM via PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
- ## Prerequisites
-Before you prepare a Windows VHD for use as a generalized image on Azure Stack Edge, make sure that:
+Before you prepare a Windows VHD for use as a generalized image on an Azure Stack Edge Pro GPU device, make sure that:
-- You have a VHD or a VHDX containing a supported version of Windows. See [Supported guest operating Systems]() for your Azure Stack Edge Pro.
+- You have a VHD or a VHDX containing a supported version of Windows.
- You have access to a Windows client with Hyper-V Manager installed. - You have access to an Azure Blob storage account to store your VHD after it is prepared.
-## Prepare a generalized Windows image from VHD
+## Prepare source VM from Windows VHD
-## Convert to a fixed VHD
+When your VM source is a Windows VHD or VHDX, you first need to convert the Windows VHD to a fixed-size VHD. You will use the fixed-size VHD to create a new virtual machine.
-For your device, you'll need fixed-size VHDs to create VM images. You'll need to convert your source Windows VHD or VHDX to a fixed VHD. Follow these steps:
+> [!IMPORTANT]
+> These procedures do not cover cases where the source VHD is configured with custom configurations and settings. For example, additional actions may be required to generalize a VHD containing custom firewall rules or proxy settings. For more information on these additional actions, see [Prepare a Windows VHD to upload to Azure - Azure Virtual Machines](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
+
+#### Convert source VHD to a fixed-size VHD
+
+For your device, you'll need fixed-size VHDs to create VM images. You'll need to convert your source Windows VHD or VHDX to a fixed VHD.
+
+Follow these steps:
1. Open Hyper-V Manager on your client system. Go to **Edit Disk**.
For your device, you'll need fixed-size VHDs to create VM images. You'll need to
![Choose disk format page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-4.png) - 1. On the **Choose disk type** page, choose **Fixed size** and select **Next>**. ![Choose disk type page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-5.png) - 1. On the **Configure disk** page, browse to the location and specify a name for the fixed size VHD disk. Select **Next>**. ![Configure disk page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-6.png) -
-1. Review the summary and select **Finish**. The VHD or VHDX conversion takes a few minutes. The time for conversion depends on the size of the source disk.
+1. Review the summary and select **Finish**. The VHD or VHDX conversion takes a few minutes. The time for conversion depends on the size of the source disk.
<!-- 1. Run PowerShell on your Windows client.
For your device, you'll need fixed-size VHDs to create VM images. You'll need to
Convert-VHD -Path <source VHD path> -DestinationPath <destination-path.vhd> -VHDType Fixed ``` -->
-You'll use this fixed VHD for all the subsequent steps in this article.
-
+You'll use this fixed-size VHD for all the subsequent steps in this article.
-## Create a Hyper-V VM from fixed VHD
+#### Create Hyper-V VM from the fixed-size VHD
1. In **Hyper-V Manager**, in the scope pane, right-click your system node to open the context menu, and then select **New** > **Virtual Machine**.
You'll use this fixed VHD for all the subsequent steps in this article.
1. Review the **Summary** and then select **Finish** to create the virtual machine.
-The virtual machine takes several minutes to create.
-
-
-## Connect to the Hyper-V VM
-
-The VM shows in the list of the virtual machines on your client system.
--
-1. Select the VM and then right-click and select **Start**.
+Creation of the virtual machine takes several minutes.
- ![Select VM and start it](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/connect-virtual-machine-2.png)
+The VM shows in the list of the virtual machines on your client system.
-2. The VM shows show as **Running**. Select the VM and then right-click and select **Connect**.
+## Start VM, and install operating system
- ![Connect to VM](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/connect-virtual-machine-4.png)
+To finish building your virtual machine, you need to start the virtual machine and walk through the operating system installation.
-After you are connected to the VM, complete the Machine setup wizard and then sign into the VM.
+After you're connected to the VM, complete the Machine setup wizard, and then sign into the VM.<!--It's not clear what they are doing here. Where does the Machine setup wizard come in?-->
## Generalize the VHD
-Use the *sysprep* utility to generalize the VHD.
-1. Inside the VM, open a command prompt.
-1. Run the following command to generalize the VHD.
+Your VHD can now be used to create a generalized image to use on Azure Stack Edge Pro GPU.
- ```
- c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm
- ```
- For details, see [Sysprep (system preparation) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
-1. After the command is complete, the VM will shut down. **Do not restart the VM**.
-
-## Upload the VHD to Azure Blob storage
+## Upload generalized VHD to Azure Blob storage
-Your VHD can now be used to create a generalized image on Azure Stack Edge.
-
-1. Upload the VHD to Azure blob storage. See the detailed instructions in [Upload a VHD using Azure Storage Explorer](../devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md).
-1. After the upload is complete, you can use the uploaded image to create VM images and VMs.
<!-- this should be added to deploy VM articles - If you experience any issues creating VMs from your new image, you can use VM console access to help troubleshoot. For information on console access, see [link].--> -- ## Next steps Depending on the nature of deployment, you can choose one of the following procedures. - [Deploy a VM from a generalized image via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)-- [Deploy a VM from a generalized image via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
+- [Prepare a generalized image from an ISO to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md)
+- [Prepare a specialized image and deploy VMs using the image](azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md)
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-models.md
DTDL is based on JSON-LD and is programming-language independent. DTDL is not ex
The rest of this article summarizes how the language is used in Azure Digital Twins.
-> [!NOTE]
-> Not all services that use DTDL implement the exact same features of DTDL. For example, IoT Plug and Play does not use the DTDL features that are for graphs, while Azure Digital Twins does not currently implement DTDL commands.
->
-> For more information on the DTDL features that are specific to Azure Digital Twins, see the section later in this article on [Azure Digital Twins DTDL implementation specifics](#azure-digital-twins-dtdl-implementation-specifics).
+### Azure Digital Twins DTDL implementation specifics
+
+Not all services that use DTDL implement the exact same features of DTDL. For example, IoT Plug and Play does not use the DTDL features that are for graphs, while Azure Digital Twins does not currently implement DTDL commands.
+
+For a DTDL model to be compatible with Azure Digital Twins, it must meet these requirements:
+
+* All top-level DTDL elements in a model must be of type *interface*. This is because Azure Digital Twins model APIs can receive JSON objects that represent either an interface or an array of interfaces. As a result, no other DTDL element types are allowed at the top level.
+* DTDL for Azure Digital Twins must not define any *commands*.
+* Azure Digital Twins only allows a single level of component nesting. This means that an interface that's being used as a component can't have any components itself.
+* Interfaces can't be defined inline within other DTDL interfaces; they must be defined as separate top-level entities with their own IDs. Then, when another interface wants to include that interface as a component or through inheritance, it can reference its ID.
+
+Azure Digital Twins also does not observe the `writable` attribute on properties or relationships. Although this can be set as per DTDL specifications, the value isn't used by Azure Digital Twins. Instead, these are always treated as writable by external clients that have general write permissions to the Azure Digital Twins service.
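Taken together, a minimal interface that satisfies these constraints looks roughly like the sketch below; the `dtmi` ID and the property are illustrative, not taken from the article's samples.

```json
{
  "@id": "dtmi:example:Thermostat;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Thermostat",
  "contents": [
    {
      "@type": "Property",
      "name": "targetTemperature",
      "schema": "double"
    }
  ]
}
```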
## Elements of a model
A DTDL model interface may contain zero, one, or many of each of the following f
>[!TIP] >Components can also be used for organization, to group sets of related properties within a model interface. In this situation, you can think of each component as a namespace or "folder" inside the interface.
-* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), etc. Relationships allow the solution to provide a graph of interrelated entities.
+* **Relationship** - Relationships let you represent how a digital twin can be involved with other digital twins. Relationships can represent different semantic meanings, such as *contains* ("floor contains room"), *cools* ("hvac cools room"), *isBilledTo* ("compressor is billed to user"), etc. Relationships allow the solution to provide a graph of interrelated entities. Relationships can also have [properties](#properties-of-relationships) of their own.
> [!NOTE] > The [spec for DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) also defines **Commands**, which are methods that can be executed on a digital twin (like a reset command, or a command to switch a fan on or off). However, *commands are not currently supported in Azure Digital Twins.*
Telemetry and properties often work together to handle data ingress from devices
You can also publish a telemetry event from the Azure Digital Twins API. As with other telemetry, that is a short-lived event that requires a listener to handle.
-### Azure Digital Twins DTDL implementation specifics
+#### Properties of relationships
-For a DTDL model to be compatible with Azure Digital Twins, it must meet these requirements.
+DTDL also allows for **relationships** to have properties of their own. When defining a relationship within a DTDL model, the relationship can have its own `properties` field where you can define custom properties to describe relationship-specific state.
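As a hedged sketch, a relationship entry inside an interface's `contents` that carries its own property might look like the following; the names and target ID are illustrative.

```json
{
  "@type": "Relationship",
  "name": "isBilledTo",
  "target": "dtmi:example:Customer;1",
  "properties": [
    {
      "@type": "Property",
      "name": "billingRate",
      "schema": "double"
    }
  ]
}
```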
-* All top-level DTDL elements in a model must be of type *interface*. This is because Azure Digital Twins model APIs can receive JSON objects that represent either an interface or an array of interfaces. As a result, no other DTDL element types are allowed at the top level.
-* DTDL for Azure Digital Twins must not define any *commands*.
-* Azure Digital Twins only allows a single level of component nesting. This means that an interface that's being used as a component can't have any components itself.
-* Interfaces can't be defined inline within other DTDL interfaces; they must be defined as separate top-level entities with their own IDs. Then, when another interface wants to include that interface as a component or through inheritance, it can reference its ID.
+## Model inheritance
-Azure Digital Twins also does not observe the `writable` attribute on properties or relationships. Although this can be set as per DTDL specifications, the value isn't used by Azure Digital Twins. Instead, these are always treated as writable by external clients that have general write permissions to the Azure Digital Twins service.
+Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model *Room*, and specialized variants *ConferenceRoom* and *Gym*. To express specialization, DTDL supports inheritance: interfaces can inherit from one or more other interfaces.
-## Example model code
+The following example re-imagines the *Planet* model from the earlier DTDL example as a subtype of a larger *CelestialBody* model. The "parent" model is defined first, and then the "child" model builds on it by using the field `extends`.
-Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension *.json*. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
-This section contains an example of a typical model, written as a DTDL interface. The model describes **planets**, each with a name, a mass, and a temperature.
-
-Consider that planets may also interact with **moons** that are their satellites, and may contain **craters**. In the example below, the `Planet` model expresses connections to these other entities by referencing two external models, `Moon` and `Crater`. These models are also defined in the example code below, but are kept very simple so as not to detract from the primary `Planet` example.
+In this example, *CelestialBody* contributes a name, a mass, and a temperature to *Planet*. The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired).
+Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain.
-The fields of the model are:
+The extending interface cannot change any of the definitions of the parent interfaces; it can only add to them. It also cannot redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface cannot contain a declaration of *mass*, even if it's also a `double`.
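A minimal sketch of this pattern, with abbreviated models and illustrative IDs rather than the article's full sample:

```json
[
  {
    "@id": "dtmi:example:CelestialBody;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Celestial body",
    "contents": [
      { "@type": "Property", "name": "name", "schema": "string" },
      { "@type": "Property", "name": "mass", "schema": "double" },
      { "@type": "Property", "name": "temperature", "schema": "double" }
    ]
  },
  {
    "@id": "dtmi:example:Planet;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Planet",
    "extends": "dtmi:example:CelestialBody;1",
    "contents": [
      { "@type": "Property", "name": "hasRings", "schema": "boolean" }
    ]
  }
]
```

Here *Planet* adds only what is specific to it; *name*, *mass*, and *temperature* come from *CelestialBody* through the inheritance chain.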
-| Field | Description |
-| | |
-| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique model identifier>;<model version number>`. |
-| `@type` | Identifies the kind of information being described. For an interface, the type is *Interface*. |
-| `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
-| `displayName` | [optional] Allows you to give the model a friendly name if desired. |
-| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (*Property*, *Telemetry*, *Command*, *Relationship*, or *Component*) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a *Property*). |
+## Model code
-> [!NOTE]
-> Note that the component interface (*Crater* in this example) is defined in the same array as the interface that uses it (*Planet*). Components must be defined this way in API calls in order for the interface to be found.
+Twin type models can be written in any text editor. The DTDL language follows JSON syntax, so you should store models with the extension *.json*. Using the JSON extension will enable many programming text editors to provide basic syntax checking and highlighting for your DTDL documents. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/).
### Possible schemas
In addition to primitive types, *Property* and *Telemetry* fields can have these
*Telemetry* fields also support `Array`.
-### Model inheritance
+### Example model
-Sometimes, you may want to specialize a model further. For example, it might be useful to have a generic model *Room*, and specialized variants *ConferenceRoom* and *Gym*. To express specialization, DTDL supports inheritance: interfaces can inherit from one or more other interfaces.
-
-The following example re-imagines the *Planet* model from the earlier DTDL example as a subtype of a larger *CelestialBody* model. The "parent" model is defined first, and then the "child" model builds on it by using the field `extends`.
+This section contains an example of a typical model, written as a DTDL interface. The model describes **planets**, each with a name, a mass, and a temperature.
+
+Consider that planets may also interact with **moons** that are their satellites, and may contain **craters**. In the example below, the `Planet` model expresses connections to these other entities by referencing two external models, `Moon` and `Crater`. These models are also defined in the example code below, but are kept very simple so as not to detract from the primary `Planet` example.
-In this example, *CelestialBody* contributes a name, a mass, and a temperature to *Planet*. The `extends` section is an interface name, or an array of interface names (allowing the extending interface to inherit from multiple parent models if desired).
+The fields of the model are:
-Once inheritance is applied, the extending interface exposes all properties from the entire inheritance chain.
+| Field | Description |
+| | |
+| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique model identifier>;<model version number>`. |
+| `@type` | Identifies the kind of information being described. For an interface, the type is *Interface*. |
+| `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
+| `displayName` | [optional] Allows you to give the model a friendly name if desired. |
+| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (*Property*, *Telemetry*, *Command*, *Relationship*, or *Component*) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a *Property*). |
-The extending interface cannot change any of the definitions of the parent interfaces; it can only add to them. It also cannot redefine a capability already defined in any of its parent interfaces (even if the capabilities are defined to be the same). For example, if a parent interface defines a `double` property *mass*, the extending interface cannot contain a declaration of *mass*, even if it's also a `double`.
+> [!NOTE]
+> Note that the component interface (*Crater* in this example) is defined in the same array as the interface that uses it (*Planet*). Components must be defined this way in API calls in order for the interface to be found.
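A compact sketch of such a model set, using illustrative `dtmi` IDs rather than the article's exact sample, might look like this:

```json
[
  {
    "@id": "dtmi:example:Planet;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Planet",
    "contents": [
      { "@type": "Property", "name": "name", "schema": "string" },
      { "@type": "Property", "name": "mass", "schema": "double" },
      { "@type": "Telemetry", "name": "temperature", "schema": "double" },
      { "@type": "Relationship", "name": "satellites", "target": "dtmi:example:Moon;1" },
      { "@type": "Component", "name": "deepestCrater", "schema": "dtmi:example:Crater;1" }
    ]
  },
  {
    "@id": "dtmi:example:Crater;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Crater"
  },
  {
    "@id": "dtmi:example:Moon;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Moon"
  }
]
```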
## Best practices for designing models
digital-twins How To Authenticate Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-authenticate-client.md
Also, to use authentication in a function, remember to:
* Use [environment variables](/sandbox/functions-recipes/environment-variables?tabs=csharp) as appropriate * Assign permissions to the functions app that enable it to access the Digital Twins APIs. For more information on Azure Functions processes, see [*How-to: Set up an Azure function for processing data*](how-to-create-azure-function.md).
+## Authenticate across tenants
+
+Azure Digital Twins is a service that only supports one [Azure Active Directory (Azure AD) tenant](../active-directory/develop/quickstart-create-new-tenant.md): the main tenant from the subscription where the Azure Digital Twins instance is located.
++
+If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different tenant from the instance, you can have each federated identity from another tenant request a **token** from the Azure Digital Twins instance's "home" tenant.
++
+You can also specify the home tenant in the credential options in your code.
++ ## Other credential methods If the highlighted authentication scenarios above do not cover the needs of your app, you can explore other types of authentication offered in the [**Microsoft identity platform**](../active-directory/develop/v2-overview.md#getting-started). The documentation for this platform covers additional authentication scenarios, organized by application type.
digital-twins How To Use Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-apis-sdks.md
The following list provides additional detail and general guidelines for using t
* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [*How-to: Make requests with Postman*](how-to-use-postman.md). * To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with a variety of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity). * You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential), which you will likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential).
-* Requests to the Azure Digital Twins APIs require a User or Service Principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance resides. To prevent bad actors from scanning URLs to discover where Azure Digital Twins instances live, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the User or Service Principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration.
+* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance resides. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [*How-to: Write app authentication code*](how-to-authenticate-client.md#authenticate-across-tenants).
* All service API calls are exposed as member functions on the `DigitalTwinsClient` class. * All service functions exist in synchronous and asynchronous versions. * All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see [here](/dotnet/api/azure.requestfailedexception).
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
```
+ >[!NOTE]
+ > If you need to access your Azure Digital Twins instance using a service principal or user account that belongs to a different Azure Active Directory tenant from the instance, you'll need to request a **token** from the Azure Digital Twins instance's "home" tenant. For more information on this process, see [*How-to: Write app authentication code*](how-to-authenticate-client.md#authenticate-across-tenants).
3. Copy the value of `accessToken` in the result, and save it to use in the next section. This is your **token value** that you will provide to Postman to authorize your requests.
digital-twins Troubleshoot Error 404 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-404.md
+
+ Title: "Azure Digital Twins request failed with Status: 404 Sub-Domain not found"
+description: "Causes and resolutions for 'Service request failed. Status: 404 Sub-Domain not found' on Azure Digital Twins."
++++ Last updated : 4/13/2021++
+# Service request failed. Status: 404 Sub-Domain not found
+
+This article describes causes and resolution steps for receiving a 404 error from service requests to Azure Digital Twins.
+
+## Symptoms
+
+This error may occur when accessing an Azure Digital Twins instance using a service principal or user account that belongs to a different [Azure Active Directory (Azure AD) tenant](../active-directory/develop/quickstart-create-new-tenant.md) from the instance. The correct [roles](concepts-security.md) seem to be assigned to the identity, but API requests fail with an error status of `404 Sub-Domain not found`.
+
+## Causes
+
+### Cause #1
+
+Azure Digital Twins requires that all authenticating users belong to the same Azure AD tenant as the Azure Digital Twins instance.
++
+## Solutions
+
+### Solution #1
+
+You can resolve this issue by having each federated identity from another tenant request a **token** from the Azure Digital Twins instance's "home" tenant.
++
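As a hedged sketch of that token request with the Azure CLI (the tenant ID is a placeholder, and the GUID is assumed to be the resource ID used for Azure Digital Twins token requests):

```powershell
# Sign in to the instance's home tenant, even if you have no subscriptions there.
az login --tenant "<home-tenant-id>" --allow-no-subscriptions

# Request a token for the Azure Digital Twins service from that tenant.
az account get-access-token --tenant "<home-tenant-id>" --resource "0b07f429-9f4b-4714-9392-cc5e8e80c8b0"
```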
+### Solution #2
+
+If you're using the `DefaultAzureCredential` class in your code and you continue encountering this issue after getting a token, you can specify the home tenant in the `DefaultAzureCredential` options to clarify the tenant even when authentication defaults down to another type.
++
+## Next steps
+
+Read more about security and permissions on Azure Digital Twins:
+* [*Concepts: Security for Azure Digital Twins solutions*](concepts-security.md)
dms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/faq.md
Azure Database Migration Service is a fully managed service designed to enable s
* Continued investment in friction-free migrations. **Q. What source/target pairs does Azure Database Migration Service currently support?**
-The service currently supports a variety of source/target pairs, or migration scenarios. For a complete listing of the status of each available migration scenario, see the article [Status of migration scenarios supported by the Azure Database Migration Service](./resource-scenario-status.md).
-
-Other migration scenarios are in preview and require submitting a nomination via the DMS Preview site. For a complete listing of the scenarios in preview and to sign up to participate in one of these offerings, see the [DMS Preview site](https://aka.ms/dms-preview/).
+The service currently supports a variety of source/target pairs, or migration scenarios. For a complete listing of the status of each available migration scenario, see the article [Status of migration scenarios supported by the Azure Database Migration Service](./resource-scenario-status.md).
**Q. What versions of SQL Server does Azure Database Migration Service support as a source?** When migrating from SQL Server, supported sources for Azure Database Migration Service are SQL Server 2005 through SQL Server 2019.
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
Azure Database Migration Service is designed to support different migration scen
With Azure Database Migration Service, you can do an offline or an online migration. With *offline* migrations, application downtime begins at the same time that the migration starts. To limit downtime to the time required to cut over to the new environment when the migration completes, use an *online* migration. It's recommended to test an offline migration to determine whether the downtime is acceptable; if not, do an online migration.
-## Migration scenario status
-
-The status of migration scenarios supported by Azure Database Migration Service varies with time. Generally, scenarios are first released in **private preview**. Participating in private preview requires customers to submit a nomination via the [DMS Preview site](https://aka.ms/dms-preview). After private preview, the scenario status changes to **public preview**. Azure Database Migration Service users can try out migration scenarios in public preview directly from the user interface. No sign-up is required. However, migration scenarios in public preview may not be available in all regions and may undergo additional changes before final release. After public preview, the scenario status changes to **generally availability**. General availability (GA) is the final release status, and the functionality is complete and accessible to all users.
- ## Migration scenario support The following tables show which migration scenarios are supported when using Azure Database Migration Service.
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/secure-webhook-delivery.md
$eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridApp
if ($eventGridSP -match "Microsoft.EventGrid") { Write-Host "The Service principal is already defined.`n"
-}
-else
-{
+} else {
# Create a service principal for the "Azure Event Grid" AAD Application and add it to the role Write-Host "Creating the Azure Event Grid service principal" $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId
Write-Host $myAppRoles
if ($myAppRoles -match $eventGridRoleName) { Write-Host "The Azure Event Grid role is already defined.`n"
-}
-else
-{
+} else {
# Add our new role to the Azure AD Application Write-Host "Creating the Azure Event Grid role in Azure Ad Application: " $myWebhookAadApplicationObjectId $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role"
Run the New-AzureADServiceAppRoleAssignment command to assign Event Grid service
```powershell $eventGridAppRole = $myApp.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName
-New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $myServicePrincipal.ObjectId -ObjectId -PrincipalId $eventGridSP.ObjectId
+New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $myServicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId
``` Run the following commands to output information that you'll use later.
event-hubs Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-baseline.md
Configure soft delete for the Azure storage account that's used for capturing Ev
- [Set up a key vault with keys](configure-customer-managed-key.md) -- [Soft delete for Azure Storage blobs](//azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal)
+- [Soft delete for Azure Storage blobs](/azure/storage/blobs/soft-delete-blob-overview)
**Responsibility**: Customer
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
Previously updated : 03/24/2021 Last updated : 04/16/2021 # Azure Resource Manager templates for Azure Front Door
The following table includes links to Azure Resource Manager templates for Azure
| Sample | Description | |-|-|
+| [Front Door (quick create)](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium/) | Creates a basic Front Door profile including an endpoint, origin group, origin, and route. |
| [Rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-rule-set/) | Creates a Front Door profile and rule set. |
+| [WAF policy with managed rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-waf-managed/) | Creates a Front Door profile and WAF with managed rule set. |
+| [WAF policy with custom rule](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-waf-custom/) | Creates a Front Door profile and WAF with custom rule. |
|**App Service origins**| **Description** | | [App Service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-app-service-public) | Creates an App Service app with a public endpoint, and a Front Door profile. | | [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-release-notes.md
You need to drop and recreate your clusters if you'd like to move existing clu
It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/101-vm-simple-linux/), then create and use a [secure shell (SSH) key pair](https://docs.microsoft.com/azure/virtual-machines/linux/mac-create-ssh-keys#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
+### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
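If you create clusters with Azure PowerShell, the head node size is something you can set explicitly. The following is only a rough sketch with hypothetical resource names; verify the exact parameter set of `New-AzHDInsightCluster` against the Az.HDInsight reference for your installed module version.

```powershell
# Rough sketch (hypothetical names): create an HDInsight 4.0 cluster with an
# explicit head node size instead of relying on a small size such as Standard_A5.
$sshCred    = Get-Credential -Message "SSH user"
$httpCred   = Get-Credential -Message "Cluster login (HTTP) user"
$storageKey = "<storage account key>"   # placeholder

New-AzHDInsightCluster `
    -ResourceGroupName "myResourceGroup" `
    -ClusterName "mycluster" `
    -Location "East US" `
    -ClusterSizeInNodes 4 `
    -ClusterType Hadoop `
    -Version "4.0" `
    -HeadNodeSize "Standard_E2_v3" `
    -WorkerNodeSize "Standard_E8_v3" `
    -HttpCredential $httpCred `
    -SshCredential $sshCred `
    -DefaultStorageAccountName "mystorage.blob.core.windows.net" `
    -DefaultStorageAccountKey $storageKey `
    -DefaultStorageContainer "mycluster"
```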
+ ### Basic support for HDInsight 3.6 starting July 1, 2021 Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
Metrics may differ from the numbers shown on your Azure IoT Central invoice. Thi
- IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics. -- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. Solution builders may choose to [validate their device templates](./overview-iot-central.md#create-device-templates) before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
+- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. Solution builders may choose to [validate their device templates](./overview-iot-central.md#connect-devices) before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
- While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-tour.md
This article introduces you to the Microsoft Azure IoT Central UI. You can use the UI to create, manage, and use an Azure IoT Central solution and its connected devices.
-As a _solution builder_, you use the Azure IoT Central UI to define your Azure IoT Central solution. You can use the UI to:
-
-* Define the types of device that connect to your solution.
-* Configure the rules and actions for your devices.
-* Customize the UI for an _operator_ who uses your solution.
-
-As an _operator_, you use the Azure IoT Central UI to manage your Azure IoT Central solution. You can use the UI to:
-
-* Monitor your devices.
-* Configure your devices.
-* Troubleshoot and remediate issues with your devices.
-* Provision new devices.
- ## IoT Central homepage
-The [IoT Central homepage](https://aka.ms/iotcentral-get-started) page is the place where you can learn more about the latest news and features available on IoT Central, create new applications, and see and launch your existing application.
+The [IoT Central homepage](https://aka.ms/iotcentral-get-started) is the place to learn more about the latest news and features available on IoT Central, create new applications, and see and launch your existing applications.
:::image type="content" source="media/overview-iot-central-tour/iot-central-homepage.png" alt-text="IoT Central homepage"::: ### Create an application
-In the Build section you can browse the list of industry-relevant IoT Central templates to help you get started quickly, or start from scratch using a Custom app template.
+In the Build section you can browse the list of industry-relevant IoT Central templates, or start from scratch using a Custom app template.
:::image type="content" source="media/overview-iot-central-tour/iot-central-build.png" alt-text="IoT Central build page":::
To learn more, see the [Create an Azure IoT Central application](quick-deploy-io
### Launch your application
-You can launch your IoT Central application by going to the URL that you or your solution builder choose during app creation. You can also see a list of all the applications you have access to in the [IoT Central app manager](https://aka.ms/iotcentral-apps).
+You launch your IoT Central application by navigating to the URL you chose during app creation. You can also see a list of all the applications you have access to in the [IoT Central app manager](https://aka.ms/iotcentral-apps).
:::image type="content" source="media/overview-iot-central-tour/app-manager.png" alt-text="IoT Central app manager":::
Once you're inside your IoT application, use the left pane to access the differe
:::column-end::: :::column span="2":::
- **Dashboard** displays your application dashboard. As a *solution builder*, you can customize the global dashboard for your operators. Depending on their user role, operators can also create their own personal dashboards.
+ **Dashboards** displays all application and personal dashboards.
**Devices** enables you to manage your connected devices - real and simulated.
- **Device groups** lets you view and create logical collections of devices specified by a query. You can save this query and use device groups through the application to perform bulk operations.
+ **Device groups** lets you view and create collections of devices specified by a query. Device groups are used through the application to perform bulk operations.
**Rules** enables you to create and edit rules to monitor your devices. Rules are evaluated based on device telemetry and trigger customizable actions.
- **Analytics** lets you create custom views on top of device data to derive insights from your application.
+ **Analytics** lets you view telemetry from your devices graphically.
**Jobs** enables you to manage your devices at scale by running bulk operations.
Once you're inside your IoT application, use the left pane to access the differe
**Data export** enables you to configure a continuous export to external services - such as storage and queues. **Administration** is where you can manage your application's settings, customization, billing, users, and roles.-
- **IoT Central** lets *administrators* to jump back to IoT Central's app manager.
:::column-end::: :::row-end:::
The top menu appears on every page:
:::image type="content" source="media/overview-iot-central-tour/toolbar.png" alt-text="IoT Central Toolbar":::
-* To search for device templates and devices, enter a **Search** value.
+* To search for devices, enter a **Search** value.
* To change the UI language or theme, choose the **Settings** icon. Learn more about [managing your application preferences](howto-manage-preferences.md) * To get help and support, choose the **Help** drop-down for a list of resources. You can [get information about your application](./howto-get-app-info.md) from the **About your app** link. In an application on the free pricing plan, the support resources include access to [live chat](howto-show-hide-chat.md). * To sign out of the application, choose the **Account** icon.
You can choose between a light theme or a dark theme for the UI:
:::image type="content" source="Media/overview-iot-central-tour/dashboard.png" alt-text="Screenshot of IoT Central Dashboard.":::
-* The dashboard is the first page you see when you sign in to your Azure IoT Central application. As a *solution builder*, you can create and customize multiple global application dashboards for other users. Learn more about [adding tiles to your dashboard](howto-add-tiles-to-your-dashboard.md)
+* The dashboard is the first page you see when you sign in to your Azure IoT Central application. You can create and customize multiple application dashboards. Learn more about [adding tiles to your dashboard](howto-add-tiles-to-your-dashboard.md)
-* As an *operator*, if your user role allows it, you can create personal dashboards to monitor what you care about. To learn more, see the [Create Azure IoT Central personal dashboards](howto-create-personal-dashboards.md) how-to article.
+* Personal dashboards can also be created to monitor what you care about. To learn more, see the [Create Azure IoT Central personal dashboards](howto-create-personal-dashboards.md) how-to article.
### Devices
To learn more, see the [Monitor your devices](./quick-monitor-devices.md) quicks
:::image type="content" source="Media/overview-iot-central-tour/device-groups.png" alt-text="Device Group page":::
-Device group are a collection of related devices. A *solution builder* defines a query to identify the devices that are included in a device group. You use device groups to perform bulk operations in your application. To learn more, see the [Use device groups in your Azure IoT Central application](tutorial-use-device-groups.md) article.
+A device group is a collection of related devices. You use device groups to perform bulk operations in your application. To learn more, see the [Use device groups in your Azure IoT Central application](tutorial-use-device-groups.md) article.
### Rules :::image type="content" source="Media/overview-iot-central-tour/rules.png" alt-text="Screenshot of Rules Page.":::
The rules page lets you define rules based on devices' telemetry, state, or even
:::image type="content" source="Media/overview-iot-central-tour/analytics.png" alt-text="Screenshot of Analytics page.":::
-The analytics lets you create custom views on top of device data to derive insights from your application. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
+The analytics page lets you view telemetry from your devices graphically, across a time series. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
### Jobs :::image type="content" source="Media/overview-iot-central-tour/jobs.png" alt-text="Jobs Page":::
-The jobs page lets you run bulk device management operations on your devices. You can update device properties, settings, and execute commands against device groups. To learn more, see the [Run a job](howto-run-a-job.md) article.
+The jobs page lets you run bulk operations on your devices. You can update device properties, settings, and execute commands against device groups. To learn more, see the [Run a job](howto-run-a-job.md) article.
### Device templates :::image type="content" source="Media/overview-iot-central-tour/templates.png" alt-text="Screenshot of Device Templates.":::
-The device templates page is where a builder creates and manages the device templates in the application. A device template specifies devices characteristics such as:
+The device templates page is where you create and manage the device templates in the application. A device template specifies device characteristics such as:
* Telemetry, state, and event measurements * Properties * Commands * Views
-The *solution builder* can also create forms and dashboards for operators to use to manage devices.
- To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial. ### Data export :::image type="content" source="Media/overview-iot-central-tour/export.png" alt-text="Data Export":::
-Data export enables you to set up streams of data, such as telemetry, from the application to external systems. To learn more, see the [Export your data in Azure IoT Central](./howto-export-data.md) article.
+Data export enables you to set up streams of data to external systems. To learn more, see the [Export your data in Azure IoT Central](./howto-export-data.md) article.
### Administration
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central.md
IoT Central is an IoT application platform that reduces the burden and cost of developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with IoT Central gives you the opportunity to focus time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
-The web UI lets you monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
+The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage millions of devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications.
This article outlines, for IoT Central:
The IoT Central documentation refers to four user roles that interact with an Io
## Create your IoT Central application
-As a solution builder, you use IoT Central to create a custom, cloud-hosted IoT solution for your organization. A custom IoT solution typically consists of:
+You can quickly deploy a new IoT Central application and then customize it to your specific requirements. Start with a generic _application template_ or with one of the industry-focused application templates for [Retail](../retail/overview-iot-central-retail.md), [Energy](../energy/overview-iot-central-energy.md), [Government](../government/overview-iot-central-government.md), or [Healthcare](../healthcare/overview-iot-central-healthcare.md).
-- A cloud-based application that receives telemetry from your devices and enables you to manage those devices.-- Multiple devices running custom code connected to your cloud-based application.
+See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walkthrough of how to create your first application.
-You can quickly deploy a new IoT Central application and then customize it to your specific requirements in your browser. You can start with a generic _application template_ or with one of the industry-focused application templates for [Retail](../retail/overview-iot-central-retail.md), [Energy](../energy/overview-iot-central-energy.md), [Government](../government/overview-iot-central-government.md), or [Healthcare](../healthcare/overview-iot-central-healthcare.md).
+## Connect devices
-As a solution builder, you use the web-based tools to create a _device template_ for the devices that connect to your application. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
+After creating your application, the first step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data. - Business properties that an operator can modify. Examples include a customer address and a last serviced date.
As a solution builder, you use the web-based tools to create a _device template_
- Properties, that an operator sets, that determine the behavior of the device. For example, a target temperature for the device. - Commands, that an operator can call, that run on a device. For example, a command to remotely reboot a device.
-This [device template](howto-set-up-template.md) includes:
+Every [device template](howto-set-up-template.md) includes:
-- A _device model_ that describes the capabilities a device should implement. The device capabilities include:
+- A _device model_ describing the capabilities a device should implement. The device capabilities include:
- The telemetry it streams to IoT Central. - The read-only properties it uses to report state to IoT Central.
This [device template](howto-set-up-template.md) includes:
- Cloud properties that aren't stored on the device. - Customizations, dashboards, and forms that are part of your IoT Central application.
-### Create device templates
-
-As a solution builder, you have several options for creating device templates:
+You have several options for creating device templates:
- Design the device template in IoT Central and then implement its device model in your device code. - Create a device model using Visual Studio Code and publish the model to a repository. Implement your device code from the model, and connect your device to your IoT Central application. IoT Central finds the device model from the repository and creates a simple device template for you. - Create a device model using Visual Studio Code. Implement your device code from the model. Manually import the device model into your IoT Central application and then add any cloud properties, customizations, and dashboards your IoT Central application needs.
-As a solution builder, you can use IoT Central to generate code for test devices to validate your device templates.
-
-If you're a device developer, see [IoT Central device development overview](./overview-iot-central-developer.md) for an introduction to implementing devices that use these device templates.
+See the [Add a simulated device](quick-create-simulated-device.md) quickstart for a walkthrough of how to create and connect your first device.
### Customize the UI
-As a solution builder, you can also customize the IoT Central application UI for the operators who are responsible for the day-to-day use of the application. Customizations that a solution builder can make include:
+You can also customize the IoT Central application UI for the operators who are responsible for the day-to-day use of the application. Customizations you can make include:
-- Defining the layout of properties and settings on a device template. - Configuring custom dashboards to help operators discover insights and resolve issues faster. - Configuring custom analytics to explore time series data from your connected devices.
+- Defining the layout of properties and settings on a device template.
## Manage your devices
As an operator, you use the IoT Central application to [manage the devices](howt
- Troubleshooting and remediating issues with devices. - Provisioning new devices.
-As a solution builder, you can [define custom rules and actions](howto-configure-rules.md) that operate over data streaming from connected devices. An operator can enable or disable these rules at the device level to control and automate tasks within the application.
+You can [define custom rules and actions](howto-configure-rules.md) that operate over data streaming from connected devices. An operator can enable or disable these rules at the device level to control and automate tasks within the application.
-With any IoT solution designed to operate at scale, a structured approach to device management is important. It's not enough just to connect your devices to the cloud, you need to keep your devices connected and healthy. An operator can use the following IoT Central capabilities to manage your devices throughout the application life cycle:
+With any IoT solution designed to operate at scale, a structured approach to device management is important. It's not enough just to connect your devices to the cloud, you need to keep your devices connected and healthy. Use the following IoT Central capabilities to manage your devices throughout the application life cycle:
### Dashboards
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
# Quickstart: Configure rules and actions for your device in Azure IoT Central
-*This article applies to operators, builders, and administrators.*
- In this quickstart, you create a rule that sends an email when the humidity reported by a device sensor exceeds 55%. ## Prerequisites
Shortly after you save the rule, it becomes live. When the conditions defined in
> [!NOTE] > After your testing is complete, turn off the rule to stop receiving alerts in your inbox.
-## Clean up resources
-- ## Next steps In this quickstart, you learned how to:
iot-central Quick Create Simulated Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-create-simulated-device.md
# Quickstart: Add a simulated device to your IoT Central application
-*This article applies to operators, builders, and administrators.*
-
-A device template defines the capabilities of a device that connects to your IoT Central application. Capabilities include telemetry the device sends, device properties, and the commands a device responds to. From a device template, a builder or operator can add both real and simulated devices to an application. Simulated devices are useful for testing the behavior of your IoT Central application before you connect real devices.
+A device template defines the capabilities of a device that connects to your IoT Central application. Capabilities include telemetry the device sends, device properties, and the commands a device responds to. Using a device template, you can add both real and simulated devices to an application. Simulated devices are useful for testing the behavior of your IoT Central application before you connect real devices.
In this quickstart, you add a device template for an ESP32-Azure IoT Kit development board and create a simulated device. To complete this quickstart, you don't need a real device; you work with a simulation of the device. An ESP32 device:
Complete the [Create an Azure IoT Central application](./quick-deploy-iot-centra
## Create a device template
-As a builder, you can create and edit device templates in your IoT Central application. After you publish a device template, you can generate simulated device or connect real devices from the device template. Simulated devices let you test the behavior of your application before you connect a real device.
- To add a new device template to your application, select the **Device Templates** tab in the left pane. :::image type="content" source="media/quick-create-simulated-device/device-definitions.png" alt-text="Screenshot showing empty list of device templates":::
The following steps show you how to use the device catalog to import the model f
1. To add a new device template, select **+ New** on the **Device templates** page.
-1. On the **Select type** page, scroll down until you find the **ESP32-Azure IoT Kit** tile in the **Use a preconfigured device template** section.
+1. On the **Select type** page, scroll down until you find the **ESP32-Azure IoT Kit** tile in the **Use a pre-configured device template** section.
1. Select the **ESP32-Azure IoT Kit** tile, and then select **Next: Review**.
A device template can include cloud properties. Cloud properties only exist in t
## Views
-As a builder, you can customize the application to display relevant information about the device to an operator. Your customizations enable the operator to manage the devices connected to the application. You can create two types of views for an operator to use to interact with devices:
+You can customize the application to display relevant information about the device. Customizations enable others to manage the devices connected to the application. You can create two types of views to interact with devices:
* Forms to view and edit device and cloud properties. * Dashboards to visualize devices including the telemetry they send.
As a builder, you can customize the application to display relevant information
Default views are a quick way to get started with visualizing your important device information. You can have up to three default views generated for your device template:
-* The **Commands** view lets your operator dispatch commands to your device.
+* The **Commands** view lets you dispatch commands to your device.
* The **Overview** view uses charts and metrics to display device telemetry. * The **About** view displays device properties. Select the **Views** node in the device template. You can see that IoT Central generated an **Overview** and an **About** view for you when you added the template.
-To add a new **Manage device** form that an operator can use to manage the device:
+To add a new form to manage the device:
1. Select the **Views** node, and then select the **Editing device and cloud data** tile to add a new view.
To publish a device template:
1. Navigate to your **Sensor Controller** device template from the **Device templates** page.
-1. Select **Publish**:
-
- :::image type="content" source="media/quick-create-simulated-device/published-model.png" alt-text="Screenshot showing location of publish icon":::
+1. Select **Publish** from the command bar at the top of the page.
-1. On the **Publish this device template to the application** dialog, select **Publish**.
+1. On the dialog that appears, select **Publish**.
-After you publish a device template, it's visible on the **Devices** page. In a published device template, you can't edit a device model without creating a new version. However, you can modify cloud properties, customizations, and views in a published device template without versioning. After making any changes, select **Publish** to push those changes out to your operator.
+After you publish a device template, it's visible on the **Devices** page. In a published device template, you can't edit a device model without creating a new version. However, you can modify cloud properties, customizations, and views in a published device template without versioning. After making any changes, select **Publish** to push those changes out for real and simulated devices to use.
## Add a simulated device To add a simulated device to your application, you use the **ESP32** device template you created.
-1. To add a new device as an operator choose **Devices** in the left pane. The **Devices** tab shows **All devices** and the **Sensor Controller** device template for the ESP32 device. Select **Sensor Controller**.
+1. To add a new device, choose **Devices** in the left pane. The **Devices** tab shows **All devices** and the **Sensor Controller** device template for the ESP32 device. Select **Sensor Controller**.
1. To add a simulated DevKit device, select **+ New**. Use the suggested **Device ID** or enter your own. A device ID can contain letters, numbers, and the `-` character. You can also enter a name for your new device. Make sure the **Simulate this device** is set to **Yes** and then select **Create**. :::image type="content" source="media/quick-create-simulated-device/simulated-device.png" alt-text="Screenshot that shows the simulated Sensor Controller device":::
-Now you can interact with the views that were created by the builder for the device template using simulated data:
+Now you can interact with the views that you created earlier using simulated data:
1. Select your simulated device on the **Devices** page
Now you can interact with the views that were created by the builder for the dev
* The **Commands** view lets you run commands, such as **reboot** on the device.
- * The **Manage devices** view is the form you created for the operator to manage the device.
+ * The **Manage devices** view is the form you created to manage the device.
* The **Raw data** view lets you view the raw telemetry and property values sent by the device. This view is useful for debugging devices.
-## Use a simulated device to improve views
-
-After you create a new simulated device, the builder can use this device to continue to improve and build upon the views for the device template.
-
-1. Choose **Device templates** in the left pane and select the **Sensor Controller** template.
-
-1. Select any of the views you'd like to edit such as **Overview**, or create a new view. Select **Configure preview device**, then **Select from a running device**. Here you can choose to have no preview device, a real device configured for testing, or an existing device you've added into IoT Central.
-
-1. Choose your simulated device in the list. Then select **Apply**. Now you can see the same simulated device in your device template views building experience. This view is useful for charts and other visualizations.
-
- :::image type="content" source="media/quick-create-simulated-device/configure-preview.png" alt-text="Screenshot showing a configured preview device":::
-
-## Clean up resources
-- ## Next steps In this quickstart, you learned how to create a **Sensor Controller** device template for an ESP32 device and add a simulated device to your application.
iot-develop Concepts Overview Connection Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/concepts-overview-connection-options.md
After you select IoT Hub or IoT Central to host your IoT application, you have s
||||| |Central web UI | Central | [Central quickstart](../iot-central/core/quick-deploy-iot-central.md) | Browser-based portal for IoT Central. | |Azure portal | Hub, Central | [Create an IoT hub with Azure portal](../iot-hub/iot-hub-create-through-portal.md), [Manage IoT Central from the Azure portal](../iot-central/core/howto-manage-iot-central-from-portal.md)| Browser-based portal for IoT Hub and devices. Also works with other Azure resources including IoT Central. |
+|Azure IoT Explorer | Hub | [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer#azure-iot-explorer-preview) | Cannot create IoT hubs. Connects to an existing IoT hub to manage devices. Often used with CLI or Portal.|
|Azure CLI | Hub, Central | [Create an IoT hub with CLI](../iot-hub/iot-hub-create-using-cli.md), [Manage IoT Central from Azure CLI](../iot-central/core/howto-manage-iot-central-from-cli.md) | Command-line interface for creating and managing IoT applications. | |Azure PowerShell | Hub, Central | [Create an IoT hub with PowerShell](../iot-hub/iot-hub-create-using-powershell.md), [Manage IoT Central from Azure PowerShell](../iot-central/core/howto-manage-iot-central-from-powershell.md) | PowerShell interface for creating and managing IoT applications | |Azure IoT Tools for VS Code | Hub | [Create an IoT hub with Tools for VS Code](../iot-hub/iot-hub-create-use-iot-toolkit.md) | VS Code extension for IoT Hub applications. |
-|Azure IoT Explorer | Hub | [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer) | Cannot create IoT hubs. Connects to an existing IoT hub to manage devices. Often used with CLI or Portal.|
## Next steps To learn more about your options for connecting devices to Azure IoT, explore the following quickstarts:
iot-develop Quickstart Device Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-device-development.md
The following tutorials are included in the getting started guide:
## Next steps After you complete a device-specific quickstart in this guide, explore the other device-specific articles and samples in the Azure RTOS getting started repo:
-* [Getting started with Azure RTOS and Azure IoT](https://github.com/azure-rtos/getting-started)
+* [Getting started with Azure RTOS and Azure IoT](https://github.com/azure-rtos/getting-started#getting-started-with-azure-rtos-and-azure-iot)
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/concepts-device-reprovision.md
Title: Azure IoT Hub Device Provisioning Service - Device concepts
description: Describes device reprovisioning concepts for the Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 04/04/2019 Last updated : 04/16/2021
Depending on the scenario, a device usually sends a request to a provisioning se
* **Never re-provision**: The device is never reassigned to a different hub. This policy is provided for managing backwards compatibility.
+> [!NOTE]
+> DPS will always call the custom allocation webhook regardless of re-provisioning policy in case there is new [ReturnData](how-to-send-additional-data.md) for the device. If the re-provisioning policy is set to **never re-provision**, the webhook will be called but the device will not change its assigned hub.
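If you manage enrollments with the Azure CLI IoT extension, the re-provisioning policy can be set per enrollment. The command below is only a sketch run from a PowerShell prompt; the DPS name, enrollment ID, and the allowed policy values (`reprovisionandmigratedata`, `reprovisionandresetdata`, `never`) are assumptions to confirm against your installed `azure-iot` extension.

```powershell
# Sketch (hypothetical names, azure-iot CLI extension assumed installed):
# create a symmetric-key individual enrollment whose devices re-provision
# and migrate their twin data when reassigned to a different hub.
az iot dps enrollment create `
    --resource-group "myResourceGroup" `
    --dps-name "myDps" `
    --enrollment-id "mydevice-001" `
    --attestation-type symmetrickey `
    --reprovision-policy reprovisionandmigratedata
```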
+ ### Managing backwards compatibility Before September 2018, device assignments to IoT hubs had a sticky behavior. When a device went back through the provisioning process, it would only be assigned back to the same IoT hub.
iot-edge How To Install Iot Edge Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-install-iot-edge-kubernetes.md
Last updated 04/26/2019
+monikerRange: "iotedge-2018-06"
# How to install IoT Edge on Kubernetes (Preview) IoT Edge can integrate with Kubernetes using it as a resilient, highly available infrastructure layer. Here is where this support fits in a high level IoT Edge solution:
iot-edge Module Edgeagent Edgehub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/module-edgeagent-edgehub.md
description: Review the specific properties and their values for the edgeAgent a
Previously updated : 08/31/2020 Last updated : 04/16/2021
The following table does not include the information that is copied from the des
| lastDesiredVersion | This integer refers to the last version of the desired properties processed by the IoT Edge agent. | | lastDesiredStatus.code | This status code refers to the last desired properties seen by the IoT Edge agent. Allowed values: `200` Success, `400` Invalid configuration, `412` Invalid schema version, `417` the desired properties are empty, `500` Failed | | lastDesiredStatus.description | Text description of the status |
-| deviceHealth | `healthy` if the runtime status of all modules is either `running` or `stopped`, `unhealthy` otherwise |
| configurationHealth.{deploymentId}.health | `healthy` if the runtime status of all modules set by the deployment {deploymentId} is either `running` or `stopped`, `unhealthy` otherwise | | runtime.platform.OS | Reporting the OS running on the device | | runtime.platform.architecture | Reporting the architecture of the CPU on the device |
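One way to inspect these reported properties on a running device is to read the `$edgeAgent` module twin. This is a sketch using the Azure CLI IoT extension from a PowerShell prompt; the hub and device names are placeholders.

```powershell
# Sketch (placeholder names): read the edgeAgent module twin and show only the
# reported properties, which include lastDesiredStatus and runtime.platform.
# Single quotes keep PowerShell from expanding $edgeAgent as a variable.
az iot hub module-twin show `
    --hub-name "myHub" `
    --device-id "myEdgeDevice" `
    --module-id '$edgeAgent' `
    --query "properties.reported"
```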
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/support.md
description: Learn which operating systems can run the Azure IoT Edge daemon and
Previously updated : 04/09/2021 Last updated : 04/16/2021
The following table lists the components included in each release up to the 1.1
| Release | iotedge | edgeHub<br>edgeAgent | libiothsm | moby | |--|--|--|--|--|
-| **1.1 LTS**<sup>1</sup> | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | 1.1.0<br>1.1.1 | |
+| **1.1 LTS**<sup>1</sup> | 1.1.0<br>1.1.1<br><br> | 1.1.0<br>1.1.1<br>1.1.2 | 1.1.0<br>1.1.1<br><br> | |
| **1.0.10** | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br>1.0.10.3<br>1.0.10.4 | 1.0.10<br>1.0.10.1<br>1.0.10.2<br><br>1.0.10.4 | | | **1.0.9** | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | 1.0.9<br>1.0.9.1<br>1.0.9.2<br>1.0.9.3<br>1.0.9.4<br>1.0.9.5 | | | **1.0.8** | 1.0.8 | 1.0.8<br>1.0.8.1<br>1.0.8.2<br>1.0.8.3<br>1.0.8.4<br>1.0.8.5 | 1.0.8 | 3.0.6 |
iot-hub-device-update Device Update Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-security.md
Title: Security for Device Update for Azure IoT Hub | Microsoft Docs
description: Understand how Device Update for IoT Hub ensures devices are updated securely. Previously updated : 2/11/2021 Last updated : 4/15/2021
Device Update for IoT Hub offers a secure method to deploy updates for device firmware, images, and applications to your IoT devices. The workflow provides an end-to-end secure channel with a full chain-of-custody model that a device can use to prove an update is trusted, unmodified and intentional.
-Each step in the Device Update workflow is protected through various security features and processes to ensure that every step in the pipeline performs a secured handoff to the next. The Device Update client identifies and properly manages any illegitimate update requests. The client also checks every download to ensure that the content is trusted, unmodified, and is intentional.
+Each step in the Device Update workflow is protected through various security features and processes to ensure that every step in the pipeline performs a secured handoff to the next. The Device Update client identifies and properly manages any illegitimate update requests. The client also checks every download to ensure that the content is trusted, unmodified, and intentional.
## For Solution Operators As Solution Operators import updates into their Device Update instance, the service uploads and checks the update binary files to ensure that they haven't been modified or swapped out by a malicious user. Once verified, the Device Update service generates an internal [update manifest](./update-manifest.md) with file hashes from the import manifest and other metadata. This update manifest is then signed by the Device Update service.
+Once ingested into the service and stored in Azure, the update binary files and associated customer metadata are automatically encrypted at rest by the Azure storage service. The Device Update service does not automatically provide additional encryption, but does allow developers to encrypt content themselves before the content reaches the Device Update service.
+ When the Solution Operator requests to update a device, a signed message is sent over the protected IoT Hub channel to the device. The request's signature is validated by the device's Device Update agent as authentic.
-Any resulting binary download is secured through validation of the update manifest signature. The update manifest contains the binary file hashes, so once the manifest is trusted the Device Update agent trusts the hashes and matches them against the binaries. Once the update binary has been downloaded and verified, it is then handed off to the installer on the device.
+Any resulting binary download is secured through validation of the update manifest signature. The update manifest contains the binary file hashes, so once the manifest is trusted the Device Update agent trusts the hashes and matches them against the binaries. Once the update binary has been downloaded and verified, it is then securely handed off to the installer on the device.
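As a rough illustration of the hash matching described above, the snippet below computes a SHA-256 hash for an update file in PowerShell. The file name is a placeholder, and whether the import manifest expects the hash in hex or base64 form is an assumption here; check the import manifest schema you are targeting.

```powershell
# Sketch: compute a SHA-256 hash for an update payload file in both hex and
# base64 forms. Verify which encoding your import manifest version expects.
$filePath = ".\my-update-1.0.swu"   # placeholder file name
$hashHex  = (Get-FileHash -Path $filePath -Algorithm SHA256).Hash
$hashB64  = [Convert]::ToBase64String(
    [System.Security.Cryptography.SHA256]::Create().ComputeHash(
        [System.IO.File]::ReadAllBytes($filePath)))
Write-Host "SHA-256 (hex):    $hashHex"
Write-Host "SHA-256 (base64): $hashB64"
```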
## For Device Builders
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-security.md
Previously updated : 07/18/2018 Last updated : 04/15/2021
Supported certificates include:
A device may either use an X.509 certificate or a security token for authentication, but not both. With X.509 certificate authentication, make sure you have a strategy in place to handle certificate rollover when an existing certificate expires.
-The following functionality is not supported for devices that use X.509 CA authentication:
+The following functionality for devices that use X.509 certificate authority (CA) authentication is not yet generally available, and [preview mode must be enabled](iot-hub-preview-mode.md):
* HTTPS, MQTT over WebSockets, and AMQP over WebSockets protocols. * File uploads (all protocols).
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
![End-to-end diagram](media/iot-hub-get-started-e2e-diagram/2.png) -
-[The IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) is an open-source IoT extension that adds to the capabilities of the [Azure CLI](/cli/azure/overview). The Azure CLI includes commands for interacting with Azure Resource Manager and management endpoints. For example, you can use Azure CLI to create an Azure VM or an IoT hub. A CLI extension enables an Azure service to augment the Azure CLI giving you access to additional service-specific capabilities. The IoT extension gives IoT developers command-line access to all IoT Hub, IoT Edge, and IoT Hub Device Provisioning Service capabilities.
--
+In this article, you learn how to use the IoT extension for Azure CLI with various management options on your development machine. [The IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension) is an open-source IoT extension that adds to the capabilities of the [Azure CLI](/cli/azure/overview). The Azure CLI includes commands for interacting with Azure Resource Manager and management endpoints. For example, you can use Azure CLI to create an Azure VM or an IoT hub. A CLI extension enables an Azure service to augment the Azure CLI giving you access to additional service-specific capabilities. The IoT extension gives IoT developers command-line access to all IoT Hub, IoT Edge, and IoT Hub Device Provisioning Service capabilities.
| Management option | Task | |-|--|
For more detailed explanation on the differences and guidance on using these opt
Device twins are JSON documents that store device state information (metadata, configurations, and conditions). IoT Hub persists a device twin for each device that connects to it. For more information about device twins, see [Get started with device twins](iot-hub-node-node-twin-getstarted.md).
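For example, with the IoT extension installed you can read a device twin document straight from a PowerShell prompt. This is only a sketch; the hub and device names are placeholders.

```powershell
# Sketch (placeholder names): install the IoT extension once, then read the
# device twin for a device registered in your hub.
az extension add --name azure-iot
az iot hub device-twin show `
    --hub-name "myHub" `
    --device-id "myDevice"
```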
-## What you learn
-
-You learn to use the IoT extension for Azure CLI with various management options on your development machine.
-## What you do
-Run Azure CLI and the IoT extension for Azure CLI with various management options.
-## What you need
+## Prerequisites
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials; for example, [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md). These items cover the following requirements:
+* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts. These articles cover the following requirements:
- - An active Azure subscription.
- - An Azure IoT hub under your subscription.
- - A client application that sends messages to your Azure IoT hub.
+ * An active Azure subscription.
+ * An Azure IoT hub under your subscription.
+ * A client application that sends messages to your Azure IoT hub.
* Make sure your device is running with the client application during this tutorial.
iot-hub Iot Hub Device Management Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-iot-toolkit.md
![End-to-end diagram](media/iot-hub-get-started-e2e-diagram/2.png)
-[Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) is a useful Visual Studio Code extension that makes IoT Hub management and IoT application development easier. It comes with management options that you can use to perform various tasks.
+In this article, you learn how to use Azure IoT Tools for Visual Studio Code with various management options on your development machine. [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) is a useful Visual Studio Code extension that makes IoT Hub management and IoT application development easier. It comes with management options that you can use to perform various tasks.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
Device twins are JSON documents that store device state information (metadata, c
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## What you learn
-
-You learn using Azure IoT Tools for Visual Studio Code with various management options on your development machine.
-
-## What you do
-
-Run Azure IoT Tools for Visual Studio Code with various management options.
-
-## What you need
+## Prerequisites
* An active Azure subscription. * An Azure IoT hub under your subscription.
iot-hub Iot Hub Device Management Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-visual-studio.md
![End-to-end diagram](media/iot-hub-device-management-visual-studio/iot-e2e-simple.png)
-[Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS) is a useful Visual Studio extension that enables you to view your Azure resources, inspect their properties and perform key developer actions from within Visual Studio. It comes with management options that you can use to perform various tasks.
+In this article, you learn how to use the Cloud Explorer for Visual Studio with various management options on your development computer. [Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS) is a useful Visual Studio extension that enables you to view your Azure resources, inspect their properties and perform key developer actions from within Visual Studio. It comes with management options that you can use to perform various tasks.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-whole.md)]
For more detailed explanation on the differences and guidance on using these opt
Device twins are JSON documents that store device state information, including metadata, configurations, and conditions. IoT Hub persists a device twin for each device that connects to it. For more information about device twins, see [Get started with device twins](iot-hub-node-node-twin-getstarted.md).
-## What you learn
-
-In this article, you learn how to use the Cloud Explorer for Visual Studio with various management options on your development computer.
-
-## What you do
-
-In this article, run Cloud Explorer for Visual Studio with various management options.
-
-## What you need
-
-You need the following prerequisites:
+## Prerequisites
- An active Azure subscription.
You need the following prerequisites:
- Microsoft Visual Studio 2017 Update 9 or later. This article uses [Visual Studio 2017 or Visual Studio 2019](https://www.visualstudio.com/vs/). -- Cloud Explorer component from Visual Studio Installer, which selected by default with Azure Workload.
+- Cloud Explorer component from Visual Studio Installer, which is selected by default with Azure Workload.
## Update Cloud Explorer to latest version
iot-hub Iot Hub Live Data Visualization In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md
[!INCLUDE [iot-hub-get-started-note](../../includes/iot-hub-get-started-note.md)]
-## What you learn
+In this article, you learn how to visualize real-time sensor data that your Azure IoT hub receives by using Power BI. If you want to try to visualize the data in your IoT hub with a web app, see [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
-You learn how to visualize real-time sensor data that your Azure IoT hub receives by using Power BI. If you want to try to visualize the data in your IoT hub with a web app, see [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
+## Prerequisites
-## What you do
-
-* Get your IoT hub ready for data access by adding a consumer group.
-
-* Create, configure, and run a Stream Analytics job for data transfer from your IoT hub to your Power BI account.
-
-* Create and publish a Power BI report to visualize the data.
-
-## What you need
-
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials; for example, [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md). These articles cover the following requirements:
+* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts. These articles cover the following requirements:
* An active Azure subscription.
- * An Azure IoT hub under your subscription.
+ * An Azure IoT hub in your subscription.
* A client application that sends messages to your Azure IoT hub. * A Power BI account. ([Try Power BI for free](https://powerbi.microsoft.com/))
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
[!INCLUDE [iot-hub-get-started-note](../../includes/iot-hub-get-started-note.md)]
-## What you learn
+In this article, you learn how to visualize real-time sensor data that your IoT hub receives with a node.js web app running on your local computer. After running the web app locally, you can optionally follow steps to host the web app in Azure App Service. If you want to try to visualize the data in your IoT hub by using Power BI, see [Use Power BI to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-power-bi.md).
-In this tutorial, you learn how to visualize real-time sensor data that your IoT hub receives with a node.js web app running on your local computer. After running the web app locally, you can optionally follow steps to host the web app in Azure App Service. If you want to try to visualize the data in your IoT hub by using Power BI, see [Use Power BI to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-power-bi.md).
+## Prerequisites
-## What you do
-
-* Add a consumer group to your IoT hub that the web application will use to read sensor data
-* Download the web app code from GitHub
-* Examine the web app code
-* Configure environment variables to hold the IoT Hub artifacts needed by your web app
-* Run the web app on your development machine
-* Open a web page to see real-time temperature and humidity data from your IoT hub
-* (Optional) Use Azure CLI to host your web app in Azure App Service
-
-## What you need
-
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials; for example, [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md). These cover the following requirements:
+* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts. These articles cover the following requirements:
* An active Azure subscription * An Iot hub under your subscription
iot-hub Iot Hub Monitoring Notifications With Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md
[Azure Logic Apps](../logic-apps/index.yml) can help you orchestrate workflows across on-premises and cloud services, one or more enterprises, and across various protocols. A logic app begins with a trigger, which is then followed by one or more actions that can be sequenced using built-in controls, such as conditions and iterators. This flexibility makes Logic Apps an ideal IoT solution for IoT monitoring scenarios. For example, the arrival of telemetry data from a device at an IoT Hub endpoint can initiate logic app workflows to warehouse the data in an Azure Storage blob, send email alerts to warn of data anomalies, schedule a technician visit if a device reports a failure, and so on.
-## What you learn
-
-You learn how to create a logic app that connects your IoT hub and your mailbox for temperature monitoring and notifications.
-
-The client code running on your device sets an application property, `temperatureAlert`, on every telemetry message it sends to your IoT hub. When the client code detects a temperature above 30 C, it sets this property to `true`; otherwise, it sets the property to `false`.
+In this article, you learn how to create a logic app that connects your IoT hub and your mailbox for temperature monitoring and notifications. The client code running on your device sets an application property, `temperatureAlert`, on every telemetry message it sends to your IoT hub. When the client code detects a temperature above 30 C, it sets this property to `true`; otherwise, it sets the property to `false`.
Messages arriving at your IoT hub look similar to the following, with the telemetry data contained in the body and the `temperatureAlert` property contained in the application properties (system properties are not shown):
To learn more about IoT Hub message format, see [Create and read IoT Hub message
In this topic, you set up routing on your IoT hub to send messages in which the `temperatureAlert` property is `true` to a Service Bus endpoint. You then set up a logic app that triggers on the messages arriving at the Service Bus endpoint and sends you an email notification.
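To exercise the route without waiting for a real high-temperature reading, one option is to send a test device-to-cloud message that carries the `temperatureAlert` application property. This is a sketch using the Azure CLI IoT extension from PowerShell; the names are placeholders and the payload your client code actually sends will differ.

```powershell
# Sketch (placeholder names): send one test message whose application property
# temperatureAlert=true matches the routing query for the Service Bus endpoint.
az iot device send-d2c-message `
    --hub-name "myHub" `
    --device-id "myDevice" `
    --data '{"temperature": 32.5, "humidity": 58.0}' `
    --props "temperatureAlert=true"
```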
-## What you do
-
-* Create a Service Bus namespace and add a Service Bus queue to it.
-* Add a custom endpoint and a routing rule to your IoT hub to route messages that contain a temperature alert to the Service Bus queue.
-* Create, configure, and test a logic app to consume messages from your Service Bus queue and send notification emails to a desired recipient.
-
-## What you need
+## Prerequisites
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials; for example, [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md). These cover the following requirements:
+* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts. These articles cover the following requirements:
* An active Azure subscription.
* An Azure IoT hub under your subscription.
iot-hub Iot Hub Visual Studio Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-visual-studio-cloud-device-messaging.md
![End-to-end diagram](./media/iot-hub-visual-studio-cloud-device-messaging/e-to-e-diagram.png)
-[Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS) is a useful Visual Studio extension that enables you to view your Azure resources, inspect their properties and perform key developer actions from within Visual Studio. This article focuses on how to use Cloud Explorer to send and receive messages between your device and your hub.
-## What you learn
- In this article, you learn how to use Cloud Explorer for Visual Studio to monitor device-to-cloud messages and to send cloud-to-device messages. Device-to-cloud messages could be sensor data that your device collects and then sends to your IoT Hub. Cloud-to-device messages could be commands that your IoT Hub sends to your device. For example, blink an LED that is connected to your device.
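Cloud Explorer drives these operations from the Visual Studio UI. Purely for orientation, a rough service-side equivalent of sending a cloud-to-device message with the `azure-iot-hub` Python SDK; the connection-string variable and device ID are placeholders, and this is not part of the Cloud Explorer workflow:

```python
import os
from azure.iot.hub import IoTHubRegistryManager

# Assumed environment variable holding the IoT hub service connection string
conn_str = os.environ["IOTHUB_SERVICE_CONNECTION_STRING"]
registry_manager = IoTHubRegistryManager(conn_str)

# Send a simple cloud-to-device command to a device (device ID is a placeholder)
registry_manager.send_c2d_message("MyNodeDevice", "blink")
```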
-## What you do
-
-In this article, you do the following tasks:
-- Use Cloud Explorer for Visual Studio to monitor device-to-cloud messages.
-- Use Cloud Explorer for Visual Studio to send cloud-to-device messages.
+[Cloud Explorer](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.CloudExplorerForVS) is a useful Visual Studio extension that enables you to view your Azure resources, inspect their properties and perform key developer actions from within Visual Studio. This article focuses on how to use Cloud Explorer to send and receive messages between your device and your hub.
-## What you need
-You need the following prerequisites:
+## Prerequisites
- An active Azure subscription.
iot-hub Iot Hub Vscode Iot Toolkit Cloud Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-vscode-iot-toolkit-cloud-device-messaging.md
![End-to-end diagram](./media/iot-hub-vscode-iot-toolkit-cloud-device-messaging/e-to-e-diagram.png)
+In this article, you learn how to use Azure IoT Tools for Visual Studio Code to monitor device-to-cloud messages and to send cloud-to-device messages. Device-to-cloud messages could be sensor data that your device collects and then sends to your IoT hub. Cloud-to-device messages could be commands that your IoT hub sends to your device to blink an LED that is connected to your device.
[Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) is a useful Visual Studio Code extension that makes IoT Hub management and IoT application development easier. This article focuses on how to use Azure IoT Tools for Visual Studio Code to send and receive messages between your device and your IoT hub.

[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-## What you will learn
-
-You learn how to use Azure IoT Tools for Visual Studio Code to monitor device-to-cloud messages and to send cloud-to-device messages. Device-to-cloud messages could be sensor data that your device collects and then sends to your IoT hub. Cloud-to-device messages could be commands that your IoT hub sends to your device to blink an LED that is connected to your device.
-
-## What you will do
-
-* Use Azure IoT Tools for Visual Studio Code to monitor device-to-cloud messages.
-
-* Use Azure IoT Tools for Visual Studio Code to send cloud-to-device messages.
-
-## What you need
+## Prerequisites
* An active Azure subscription.
iot-hub Iot Hub Weather Forecast Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-weather-forecast-machine-learning.md
[!INCLUDE [iot-hub-get-started-note](../../includes/iot-hub-get-started-note.md)]
-Machine learning is a technique of data science that helps computers learn from existing data to forecast future behaviors, outcomes, and trends. Azure Machine Learning Studio (classic) is a cloud predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions.
+Machine learning is a technique of data science that helps computers learn from existing data to forecast future behaviors, outcomes, and trends. Azure Machine Learning Studio (classic) is a cloud predictive analytics service that makes it possible to quickly create and deploy predictive models as analytics solutions. In this article, you learn how to use Azure Machine Learning Studio (classic) to do weather forecasting (chance of rain) using the temperature and humidity data from your Azure IoT hub. The chance of rain is the output of a prepared weather prediction model. The model is built upon historic data to forecast chance of rain based on temperature and humidity.
-## What you learn
+## Prerequisites
-You learn how to use Azure Machine Learning Studio (classic) to do weather forecast (chance of rain) using the temperature and humidity data from your Azure IoT hub. The chance of rain is the output of a prepared weather prediction model. The model is built upon historic data to forecast chance of rain based on temperature and humidity.
-
-## What you do
-- Deploy the weather prediction model as a web service.
-- Get your IoT hub ready for data access by adding a consumer group.
-- Create a Stream Analytics job and configure the job to:
- - Read temperature and humidity data from your IoT hub.
- - Call the web service to get the rain chance.
- - Save the result to an Azure blob storage.
-- Use Microsoft Azure Storage Explorer to view the weather forecast.
-## What you need
--- Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials; for example, [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md). These cover the following requirements:
+- Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts. These articles cover the following requirements:
- An active Azure subscription.
- An Azure IoT hub under your subscription.
- A client application that sends messages to your Azure IoT hub.
iot-hub Quickstart Control Device Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-control-device-python.md
In this quickstart, you use a direct method to control a simulated device connec
* [Python 3.7+](https://www.python.org/downloads/). For other versions of Python supported, see [Azure IoT Device Features](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#azure-iot-device-features).
-* [A sample Python project](https://github.com/Azure-Samples/azure-iot-samples-python/archive/master.zip).
+* [A sample Python project](https://github.com/Azure-Samples/azure-iot-samples-python/) from GitHub. Download or clone the samples by using the **Code** button in the GitHub repository.
* Port 8883 open in your firewall. The device sample in this quickstart uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
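The quickstart's back-end application invokes the simulated device's direct method through the service SDK. A minimal sketch of such a call with the `azure-iot-hub` Python package; the device ID and connection-string variable are placeholders, and the method name and payload follow the quickstart's `SetTelemetryInterval` convention:

```python
import os
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

# Assumed environment variable holding the IoT hub service connection string
conn_str = os.environ["IOTHUB_SERVICE_CONNECTION_STRING"]
registry_manager = IoTHubRegistryManager(conn_str)

# Ask the simulated device to send telemetry every 10 seconds
method = CloudToDeviceMethod(method_name="SetTelemetryInterval", payload="10")
response = registry_manager.invoke_device_method("<your-device-id>", method)
print(response.status, response.payload)
```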
iot-hub Quickstart Send Telemetry Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-send-telemetry-python.md
In this quickstart, you send telemetry from a simulated device application throu
* [Python 3.7+](https://www.python.org/downloads/). For other versions of Python supported, see [Azure IoT Device Features](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#azure-iot-device-features).
-* [A sample Python project](https://github.com/Azure-Samples/azure-iot-samples-python/archive/master.zip).
+* [A sample Python project](https://github.com/Azure-Samples/azure-iot-samples-python/) from GitHub. Download or clone the samples by using the **Code** button in the GitHub repository.
* Port 8883 open in your firewall. The device sample in this quickstart uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
A device must be registered with your IoT hub before it can connect. In this qui
The simulated device application connects to a device-specific endpoint on your IoT hub and sends simulated temperature and humidity telemetry.
+1. Download or clone the azure-iot-samples-python repository using the **Code** button on the [azure-iot-samples-python repository page](https://github.com/Azure-Samples/azure-iot-samples-python/).
1. In a local terminal window, navigate to the root folder of the sample Python project. Then navigate to the **iot-hub\Quickstarts\simulated-device** folder.
1. Open the **SimulatedDevice.py** file in a text editor of your choice.
iot-hub Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-baseline.md
Deploy the firewall solution of your choice at each of your organization's netwo
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Devices**:
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
To complete this tutorial, you need:
* An Azure subscription. [Create one for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
* The [.NET Core 3.1 SDK (or later)](https://dotnet.microsoft.com/download/dotnet-core/3.1).
-* A [Git](https://www.git-scm.com/downloads) installation.
+* A [Git](https://www.git-scm.com/downloads) installation of version 2.28.0 or greater.
* The [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/).
* [Azure Key Vault.](./overview.md) You can create a key vault by using the [Azure portal](quick-create-portal.md), the [Azure CLI](quick-create-cli.md), or [Azure PowerShell](quick-create-powershell.md).
* A Key Vault [secret](../secrets/about-secrets.md). You can create a secret by using the [Azure portal](../secrets/quick-create-portal.md), [PowerShell](../secrets/quick-create-powershell.md), or the [Azure CLI](../secrets/quick-create-cli.md).
In this step, you'll deploy your .NET Core application to Azure App Service by u
In the terminal window, select **Ctrl+C** to close the web server. Initialize a Git repository for the .NET Core project:

```bash
-git init
+git init --initial-branch=main
git add .
git commit -m "first commit"
```
Local git is configured with url of 'https://&lt;username&gt;@&lt;your-webapp-na
}
</pre>

The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, in the format `https://<username>@<your-webapp-name>.scm.azurewebsites.net/<your-webapp-name>.git`. Save this URL. You'll need it later.
+Now configure your web app to deploy from the `main` branch:
+
+```azurecli-interactive
+az webapp config appsettings set -g MyResourceGroup --name "<your-webapp-name>" --settings deployment_branch=main
+```
Go to your new app by using the following command. Replace `<your-webapp-name>` with your app name.

```bash
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys-ncipher.md
# Import HSM-protected keys for Key Vault (nCipher) > [!WARNING]
-> The HSM-key import method described in this document is **deprecated** and will not be supported in future. It only works with nCipher nShield family of HSMs with firmware 12.40.2 or 12.50 with a hotfix. Using [new method to import HSM-keys](hsm-protected-keys-byok.md) is strongly recommended.
+> The HSM-key import method described in this document is **deprecated** and will not be supported after June 30, 2021. It only works with nCipher nShield family of HSMs with firmware 12.40.2 or 12.50 with a hotfix. Using [new method to import HSM-keys](hsm-protected-keys-byok.md) is strongly recommended.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
key-vault Hsm Protected Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys.md
Transferring HSM-protected keys to Key Vault is supported via two different meth
|Vendor Name|Vendor Type|Supported HSM models|Supported HSM-key transfer method|
|||||
-|[nCipher](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|Manufacturer,<br/>HSM as a Service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|**Method 1:** [nCipher BYOK](hsm-protected-keys-ncipher.md) (deprecated)<br/>**Method 2:** [Use new BYOK method](hsm-protected-keys-byok.md) (recommended)|
+|[nCipher](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|Manufacturer,<br/>HSM as a Service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|**Method 1:** [nCipher BYOK](hsm-protected-keys-ncipher.md) (deprecated). This method will not be supported after <strong>June 30, 2021</strong><br/>**Method 2:** [Use new BYOK method](hsm-protected-keys-byok.md) (recommended)|
|Thales|Manufacturer|<ul><li>Luna HSM 7 family with firmware version 7.3 or newer</li></ul>| [Use new BYOK method](hsm-protected-keys-byok.md)|
|Fortanix|Manufacturer,<br/>HSM as a Service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Use new BYOK method](hsm-protected-keys-byok.md)|
|Marvell|Manufacturer|All LiquidSecurity HSMs with<ul><li>Firmware version 2.0.4 or later</li><li>Firmware version 3.2 or newer</li></ul>|[Use new BYOK method](hsm-protected-keys-byok.md)|
load-balancer Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/security-baseline.md
Using a Standard Load Balancer is recommended for your production workloads and
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Also send the flow logs to a Log Analytics workspace and then use Traffic Analyt
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Use Security Center's Adaptive Network Hardening feature to recommend network se
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
Use Security Center's Adaptive Network Hardening feature to recommend network se
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Network**:
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration
Previously updated : 04/05/2021
Last updated : 04/16/2021

# Limits and configuration information for Azure Logic Apps
This section lists the outbound IP addresses for the Azure Logic Apps service an
| UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24 | 51.140.74.150, 51.140.80.51, 51.140.61.124, 51.105.77.96 - 51.105.77.127, 51.140.148.0 - 51.140.148.15 |
| UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63 | 51.141.52.185, 51.141.47.105, 51.141.124.13, 51.140.211.0 - 51.140.211.15, 51.140.212.224 - 51.140.212.255 |
| West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75 | 52.161.101.204, 52.161.102.22, 13.78.132.82, 13.71.195.32 - 13.71.195.47, 13.71.199.192 - 13.71.199.223 |
-| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223 |
+| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | 52.166.78.89, 52.174.88.118, 40.91.208.65, 13.69.64.208 - 13.69.64.223, 13.69.71.192 - 13.69.71.223 |
| West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | 104.211.189.124, 104.211.189.218, 20.38.128.224 - 20.38.128.255, 104.211.146.224 - 104.211.146.239 |
-| West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32 | 13.93.148.62, 104.42.122.49, 40.112.195.87, 13.86.223.32 - 13.86.223.63, 40.112.243.160 - 40.112.243.175 |
+| West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5 | 13.93.148.62, 104.42.122.49, 40.112.195.87, 13.86.223.32 - 13.86.223.63, 40.112.243.160 - 40.112.243.175 |
| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219 | 52.191.164.250, 52.183.78.157, 13.66.140.128 - 13.66.140.143, 13.66.145.96 - 13.66.145.127 |
||||
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
After you create and [register](#register-datasets) your dataset, you can load i
If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).

### Filter datasets (preview)

Filtering capabilities depend on the type of dataset you have.

> [!IMPORTANT]
> Filtering datasets with the public preview method, [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
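For orientation, a minimal sketch of the preview `filter()` method on a `TabularDataset`; the workspace configuration, dataset name, and column name are placeholders:

```python
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
tabular_ds = Dataset.get_by_name(ws, name="weather-data")  # placeholder dataset name

# Keep only rows where the temperature column exceeds 30 (preview API; may change)
filtered_ds = tabular_ds.filter(tabular_ds["temperature"] > 30)
print(filtered_ds.take(5).to_pandas_dataframe())
```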
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
Last updated 03/02/2021
-# Customer intent: As a data scientist, I want to prepare my data at scale and to train my machine learning models from a single notebook.
+# Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
# Attach Apache Spark pools (powered by Azure Synapse Analytics) for data wrangling (preview)
-In this article, you learn how to attach and launch an Apache Spark pool powered by [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) for data wrangling at scale.
+In this article, you learn how to attach an Apache Spark pool powered by [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to your Azure Machine Learning workspace, so you can launch it and perform data wrangling at scale.
-This article contains guidance for performing data wrangling tasks interactively within a dedicated Synapse session in a Jupyter notebook. If you prefer to use Azure Machine Learning pipelines, see [How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)](how-to-use-synapsesparkstep.md).
+This article contains guidance for performing data wrangling tasks interactively within a dedicated Synapse session in a Jupyter notebook using the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/). If you prefer to use Azure Machine Learning pipelines, see [How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)](how-to-use-synapsesparkstep.md).
+
+If you're looking for guidance on how to use Azure Synapse Analytics with a Synapse workspace, see the [Azure Synapse Analytics get started series](../synapse-analytics/get-started.md).
>[!IMPORTANT]
> The Azure Machine Learning and Azure Synapse Analytics integration is in preview. The capabilities presented in this article employ the `azureml-synapse` package which contains [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that may change at any time.
The Azure Synapse Analytics integration with Azure Machine Learning (preview) al
## Prerequisites
+* The [Azure Machine Learning Python SDK installed](/python/api/overview/azure/ml/install).
* [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=python).
* [Create an Azure Synapse Analytics workspace in Azure portal](../synapse-analytics/quickstart-create-workspace.md).
To retrieve and use an existing linked service requires **User or Contributor**
View all the linked services associated with your machine learning workspace.

```python
+from azureml.core import LinkedService
LinkedService.list(ws)
```

This example retrieves an existing linked service, `synapselink1`, from the workspace, `ws`, with the [`get()`](/python/api/azureml-core/azureml.core.linkedservice#get-workspace--name-) method.

```python
+from azureml.core import LinkedService
linked_service = LinkedService.get(ws, 'synapselink1')
```
Once you retrieve the linked service, attach a Synapse Apache Spark pool as a de
You can attach Apache Spark pools via:

* Azure Machine Learning studio
* [Azure Resource Manager (ARM) templates](https://github.com/Azure/azure-quickstart-templates/blob/master/101-machine-learning-linkedservice-create/azuredeploy.json)
-* The Python SDK
+* The Azure Machine Learning Python SDK
### Attach a pool via the studio
Follow these steps:
You can also employ the **Python SDK** to attach an Apache Spark pool. The following code:
-1. Configures the SynapseCompute with,
+1. Configures the [`SynapseCompute`](/python/api/azureml-core/azureml.core.compute.synapsecompute) with,
- 1. The LinkedService, `linked_service` that you either created or retrieved in the previous step.
+ 1. The [`LinkedService`](/python/api/azureml-core/azureml.core.linkedservice), `linked_service` that you either created or retrieved in the previous step.
1. The type of compute target you want to attach, `SynapseSpark` 1. The name of the Apache Spark pool. This must match an existing Apache Spark pool that is in your Azure Synapse Analytics workspace.
-1. Creates a machine learning ComputeTarget by passing in,
+1. Creates a machine learning [`ComputeTarget`](/python/api/azureml-core/azureml.core.computetarget) by passing in,
    1. The machine learning workspace you want to use, `ws`
    1. The name you'd like to refer to the compute within the Azure Machine Learning workspace.
    1. The attach_configuration you specified when configuring your Synapse Compute.
from azureml.core.compute import SynapseCompute, ComputeTarget
attach_config = SynapseCompute.attach_configuration(linked_service, #Linked synapse workspace alias
                                                    type='SynapseSpark', #Type of assets to attach
- pool_name="<Synapse Spark pool name>") #Name of Synapse spark pool
+ pool_name=synapse_spark_pool_name) #Name of Synapse spark pool
synapse_compute = ComputeTarget.attach(workspace= ws,
- name="<Synapse Spark pool alias in Azure ML>",
- attach_configuration=attach_config
+ name= synapse_compute_name,
+ attach_configuration= attach_config
)
synapse_compute.wait_for_completion()
Verify the Apache Spark pool is attached.
ws.compute_targets['Synapse Spark pool alias']
```
-## Launch Synapse Spark pool for data preparation tasks
+## Launch Synapse Spark pool for data wrangling tasks
To begin data preparation with the Apache Spark pool, specify the Apache Spark pool name:
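A minimal sketch of starting and stopping the dedicated Synapse session with the notebook magic installed by the `azureml-synapse` package; the compute alias and exact flags here should be treated as assumptions based on the preview package, not a definitive reference:

```python
# Start a Synapse Spark session on the attached pool (alias is a placeholder)
%synapse start -c "synapse-spark-compute"

# ... run %%synapse cells for data wrangling ...

# Stop the session when you're done to release the pool
%synapse stop
```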
You can also get an existing registered dataset in your workspace and perform da
The following example authenticates to the workspace, gets a registered TabularDataset, `blob_dset`, that references files in blob storage, and converts it into a spark dataframe. When you convert your datasets into a spark dataframe, you can leverage `pyspark` data exploration and preparation libraries.

``` python
%%synapse

from azureml.core import Workspace, Dataset

subscription_id = "<enter your subscription ID>"
The following code, expands upon the HDFS example in the previous section and fi
```python
%%synapse

from pyspark.sql.functions import col, desc

df.filter(col('Survived') == 1).groupBy('Age').count().orderBy(desc('count')).show(10)
train_ds = Dataset.File.from_files(path=datastore_paths, validate=True)
input1 = train_ds.as_mount()
```
+## Use a `ScriptRunConfig` to submit an experiment run to a Synapse Spark pool
-## Example notebooks
+You can also [leverage the Synapse spark cluster you attached previously](#attach-a-pool-with-the-python-sdk) as a compute target for submitting an experiment run with a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object.
+
+```Python
+from azureml.core import RunConfiguration
+from azureml.core import ScriptRunConfig
+from azureml.core import Experiment
+
+run_config = RunConfiguration(framework="pyspark")
+run_config.target = synapse_compute_name
-Once your data is prepared, learn how to [leverage a Synase spark cluster as a compute target for model training](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb).
+run_config.spark.configuration["spark.driver.memory"] = "1g"
+run_config.spark.configuration["spark.driver.cores"] = 2
+run_config.spark.configuration["spark.executor.memory"] = "1g"
+run_config.spark.configuration["spark.executor.cores"] = 1
+run_config.spark.configuration["spark.executor.instances"] = 1
+
+run_config.environment.python.conda_dependencies = conda_dep
+
+script_run_config = ScriptRunConfig(source_directory = './code',
+ script= 'dataprep.py',
+ arguments = ["--tabular_input", input1,
+ "--file_input", input2,
+ "--output_dir", output],
+ run_config = run_config)
+```
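The `conda_dependencies` assignment above references a `conda_dep` object that is built earlier in the full notebook. A minimal sketch of one way to define it; the package names here are illustrative assumptions, not the exact list from the sample:

```python
from azureml.core.conda_dependencies import CondaDependencies

# Pip dependencies the data-prep script needs on the Synapse Spark pool
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-core")
conda_dep.add_pip_package("azureml-dataset-runtime[fuse,pandas]")
```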
+
+Once your `ScriptRunConfig` object is set up, you can submit the run.
+
+```python
+from azureml.core import Experiment
+
+exp = Experiment(workspace=ws, name="synapse-spark")
+run = exp.submit(config=script_run_config)
+run
+```
+For additional details, like the `dataprep.py` script used in this example, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb).
+
+## Example notebooks
-See this [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb) for additional concepts and demonstrations of the Azure Synapse Analytics and Azure Machine Learning integration capabilities.
+See this [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb) for more concepts and demonstrations of the Azure Synapse Analytics and Azure Machine Learning integration capabilities.
## Next steps
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
In this article, you learn how to create a linked service that links your [Azure Synapse Analytics](/azure/synapse-analytics/overview-what-is) workspace and [Azure Machine Learning workspace](concept-workspace.md).
-With your Azure Machine Learning workspace linked with your Azure Synapse workspace, you can attach an Apache Spark pool as a dedicated compute for data wrangling at scale and conduct model training from the same notebook.
+With your Azure Machine Learning workspace linked with your Azure Synapse workspace, you can attach an Apache Spark pool as a dedicated compute for data wrangling at scale or conduct model training all from the same Python notebook.
You can link your ML workspace and Synapse workspace via the [Python SDK](#link-sdk) or the [Azure Machine Learning studio](#link-studio).
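For orientation, a minimal sketch of registering such a linked service with the Python SDK; the class and method names reflect the `azureml-core` linked service API, but the subscription, resource group, and workspace names are placeholders:

```python
from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration

ws = Workspace.from_config()

# Identify the Azure Synapse workspace to link (placeholder values)
synapse_link_config = SynapseWorkspaceLinkedServiceConfiguration(
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
    name="<synapse-workspace-name>",
)

# Register the linked service in the Azure Machine Learning workspace
linked_service = LinkedService.register(
    workspace=ws,
    name="synapselink1",
    linked_service_config=synapse_link_config,
)
```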
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
# Use the interpretability package to explain ML models & predictions in Python (preview)

In this how-to guide, you learn to use the interpretability package of the Azure Machine Learning Python SDK to perform the following tasks:
The following example shows how you can use the `ExplanationClient` class to ena
## Visualizations
-After you download the explanations in your local Jupyter Notebook, you can use the visualization dashboard to understand and interpret your model. To load the visualization dashboard widget in your Jupyter Notebook, use the following code:
+After you download the explanations in your local Jupyter Notebook, you can use the visualizations in the explanations dashboard to understand and interpret your model. To load the explanations dashboard widget in your Jupyter Notebook, use the following code:
```python
from interpret_community.widget import ExplanationDashboard
from interpret_community.widget import ExplanationDashboard
ExplanationDashboard(global_explanation, model, datasetX=x_test)
```
-The visualization supports explanations on both engineered and raw features. Raw explanations are based on the features from the original dataset and engineered explanations are based on the features from the dataset with feature engineering applied.
+The visualizations support explanations on both engineered and raw features. Raw explanations are based on the features from the original dataset and engineered explanations are based on the features from the dataset with feature engineering applied.
When attempting to interpret a model with respect to the original dataset it is recommended to use raw explanations as each feature importance will correspond to a column from the original dataset. One scenario where engineered explanations might be useful is when examining the impact of individual categories from a categorical feature. If a one-hot encoding is applied to a categorical feature, then the resulting engineered explanations will include a different importance value per category, one per one-hot engineered feature. This can be useful when narrowing down which part of the dataset is most informative to the model.
The fourth tab of the explanation tab lets you drill into an individual datapoin
### Visualization in Azure Machine Learning studio
-If you complete the [remote interpretability](how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs) steps (uploading generated explanation to Azure Machine Learning Run History), you can view the visualization dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the visualization dashboard explained above. What-If datapoint generation and ICE plots are disabled as there is no active compute in Azure Machine Learning studio that can perform their real time computations.
+If you complete the [remote interpretability](how-to-machine-learning-interpretability-aml.md#generate-feature-importance-values-via-remote-runs) steps (uploading generated explanations to Azure Machine Learning Run History), you can view the visualizations on the explanations dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is a simpler version of the dashboard widget that's generated within your Jupyter notebook. What-If datapoint generation and ICE plots are disabled as there is no active compute in Azure Machine Learning studio that can perform their real-time computations.
If the dataset, global, and local explanations are available, data populates all of the tabs. If only a global explanation is available, the Individual feature importance tab will be disabled.
-Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
+Follow one of these paths to access the explanations dashboard in Azure Machine Learning studio:
* **Experiments** pane (Preview) 1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
Follow one of these paths to access the visualization dashboard in Azure Machine
* **Models** pane 1. If you registered your original model by following the steps in [Deploy models with Azure Machine Learning](./how-to-deploy-and-where.md), you can select **Models** in the left pane to view it.
- 1. Select a model, and then the **Explanations** tab to view the explanation visualization dashboard.
+ 1. Select a model, and then the **Explanations** tab to view the explanations dashboard.
## Interpretability at inference time
machine-learning How To Machine Learning Interpretability Automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md
Last updated 07/09/2020
# Interpretability: model explanations in automated machine learning (preview)
-In this article, you learn how to get explanations for automated machine learning (AutoML) in Azure Machine Learning. AutoML helps you understand feature importance of the models that are generated.
+In this article, you learn how to get explanations for automated machine learning (automated ML) in Azure Machine Learning using the Python SDK. Automated ML helps you understand feature importance of the models that are generated.
All SDK versions after 1.0.85 set `model_explainability=True` by default. In SDK version 1.0.85 and earlier versions users need to set `model_explainability=True` in the `AutoMLConfig` object in order to use model interpretability. + In this article, you learn how to: - Perform interpretability during training for best model or any model.
In this article, you learn how to:
## Prerequisites

- Interpretability features. Run `pip install azureml-interpret` to get the necessary package.
-- Knowledge of building AutoML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [regression model tutorial](tutorial-auto-train-models.md) or see how to [configure AutoML experiments](how-to-configure-auto-train.md).
+- Knowledge of building automated ML experiments. For more information on how to use the Azure Machine Learning SDK, complete this [regression model tutorial](tutorial-auto-train-models.md) or see how to [configure automated ML experiments](how-to-configure-auto-train.md).
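As noted above, explanations are controlled by the `model_explainability` flag on `AutoMLConfig`. A minimal sketch of a configuration that sets it explicitly; the training data, label column, and compute name are placeholders:

```python
from azureml.train.automl import AutoMLConfig

# train_data is an Azure ML TabularDataset defined elsewhere (placeholder)
automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="label",
    compute_target="cpu-cluster",
    iterations=10,
    model_explainability=True,  # default in SDK versions after 1.0.85
)
```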
## Interpretability during training for the best model
automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_t
### Initialize the Mimic Explainer for feature importance
-To generate an explanation for AutoML models, use the `MimicWrapper` class. You can initialize the MimicWrapper with these parameters:
+To generate an explanation for automated ML models, use the `MimicWrapper` class. You can initialize the MimicWrapper with these parameters:
- The explainer setup object
- Your workspace
-- A surrogate model to explain the `fitted_model` AutoML model
+- A surrogate model to explain the `fitted_model` automated ML model
The MimicWrapper also takes the `automl_run` object where the engineered explanations will be uploaded.
explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator,
### Use Mimic Explainer for computing and visualizing engineered feature importance
-You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also sign in to [Azure Machine Learning Studio](https://ml.azure.com/) to view the dashboard visualization of the feature importance values of the generated engineered features by AutoML featurizers.
+You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the generated engineered features. You can also sign in to [Azure Machine Learning studio](https://ml.azure.com/) to view the explanations dashboard visualization of the feature importance values of the generated engineered features by automated ML featurizers.
```python engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)
ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipelin
### Use Mimic Explainer for computing and visualizing raw feature importance
-You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the raw features. In [Machine Learning Studio](https://ml.azure.com/), you can view the dashboard visualization of the feature importance values of the raw features.
+You can call the `explain()` method in MimicWrapper with the transformed test samples to get the feature importance for the raw features. In the [Machine Learning studio](https://ml.azure.com/), you can view the dashboard visualization of the feature importance values of the raw features.
```python raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
print(raw_explanations.get_feature_importance_dict())
## Interpretability during inference
-In this section, you learn how to operationalize an AutoML model with the explainer that was used to compute the explanations in the previous section.
+In this section, you learn how to operationalize an automated ML model with the explainer that was used to compute the explanations in the previous section.
### Register the model and the scoring explainer
if service.state == 'Healthy':
print('raw_local_importance_values:\n{}\n'.format(output['raw_local_importance_values'])) ```
-### Visualize to discover patterns in data and explanations at training time
+## Visualize to discover patterns in data and explanations at training time
-You can visualize the feature importance chart in your workspace in [Machine Learning Studio](https://ml.azure.com). After your AutoML run is complete, select **View model details** to view a specific run. Select the **Explanations** tab to see the explanation visualization dashboard.
+You can visualize the feature importance chart in your workspace in [Azure Machine Learning studio](https://ml.azure.com). After your AutoML run is complete, select **View model details** to view a specific run. Select the **Explanations** tab to see the visualizations in the explanation dashboard.
[![Machine Learning Interpretability Architecture](./media/how-to-machine-learning-interpretability-automl/automl-explanation.png)](./media/how-to-machine-learning-interpretability-automl/automl-explanation.png#lightbox)
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-interpretability.md
You can run explanation remotely on Azure Machine Learning Compute and log the e
## Next steps

- See the [how-to](how-to-machine-learning-interpretability-aml.md) for enabling interpretability for models training both locally and on Azure Machine Learning remote compute resources.
+- Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
- See the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model) for additional scenarios.
- If you're interested in interpretability for text scenarios, see [Interpret-text](https://github.com/interpretml/interpret-text), a related open source repo to [Interpret-Community](https://github.com/interpretml/interpret-community/), for interpretability techniques for NLP. `azureml.interpret` package does not currently support these techniques but you can get started with an [example notebook on text classification](https://github.com/interpretml/interpret-text/blob/master/notebooks/text_classification/text_classification_classical_text_explainer.ipynb).
machine-learning How To Understand Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-understand-automated-ml.md
In this example, note that the better model has a predicted vs. true line that i
## Model explanations and feature importances
-While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model interpretability dashboard to measure and report the relative contributions of dataset features.
-
-To view the interpretability dashboard in the studio:
-1. [Sign into the studio](https://ml.azure.com/) and navigate to your workspace
-2. In the left menu, select **Experiments**
-3. Select your experiment from the list of experiments
-4. In the table at the bottom of the page, select an AutoML run
-5. In the **Models** tab, select the **Algorithm name** for the model you want to explain
-6. In the **Explanations** tab, you may see an explanation was already created if the model was the best
-7. To create a new explanation, select **Explain model** and select the remote compute with which to compute explanations
-
-[Learn more about model explanations in automated ML](how-to-machine-learning-interpretability-automl.md).
+While model evaluation metrics and charts are good for measuring the general quality of a model, inspecting which dataset features a model used to make its predictions is essential when practicing responsible AI. That's why automated ML provides a model explanations dashboard to measure and report the relative contributions of dataset features. See how to [view the explanations dashboard in the Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#model-explanations-preview).
+
+For a code first experience, see how to set up [model explanations for automated ML experiments with the Azure Machine Learning Python SDK](how-to-machine-learning-interpretability-automl.md).
> [!NOTE] > The ForecastTCN model is not currently supported by automated ML explanations and other forecasting models may have limited access to interpretability tools.
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
-+ Last updated 12/20/2020
Drill down on any of the completed models to see training run details, like a mo
[![Iteration details](media/how-to-use-automated-ml-for-ml-models/iteration-details.png)](media/how-to-use-automated-ml-for-ml-models/iteration-details-expanded.png)
-## Model explanations
+## Model explanations (preview)
-To better understand your model, see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
+To better understand your model, you can see which data features (raw or engineered) influenced the model's predictions with the model explanations dashboard.
-The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importances. [Learn more about the explanation dashboard visualizations and specific plots](how-to-machine-learning-interpretability-aml.md#visualizations).
+The model explanations dashboard provides an overall analysis of the trained model along with its predictions and explanations. It also lets you drill into an individual data point and its individual feature importances. [Learn more about the explanation dashboard visualizations](how-to-machine-learning-interpretability-aml.md#visualizations).
To get explanations for a particular model,
-1. On the **Models** tab, select the model you want to use.
-1. Select the **Explain model** button and provide a compute that can be used to generate the explanations.
+1. On the **Models** tab, select the model you want to understand.
+1. Select the **Explain model** button, and provide a compute that can be used to generate the explanations.
1. Check the **Child runs** tab for the status. 1. Once complete, navigate to the **Explanations (preview)** tab which contains the explanations dashboard.
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-synapsesparkstep.md
This "data preparation" script doesn't do any real data transformation, but illu
## Use the `SynapseSparkStep` in a pipeline
-Other steps in the pipeline may have their own unique environments and run on different compute resources appropriate to the task at hand. The sample notebook runs the "training step" on a small CPU cluster:
+The following example uses the output from the `SynapseSparkStep` created in the [previous section](#create-a-synapsesparkstep-that-uses-the-linked-apache-spark-pool). Other steps in the pipeline may have their own unique environments and run on different compute resources appropriate to the task at hand. The sample notebook runs the "training step" on a small CPU cluster:
```python
from azureml.core.compute import AmlCompute
machine-learning Tutorial Labeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-labeling.md
Image labels can be exported in [COCO format](http://cocodataset.org/#format-dat
## Next steps

> [!div class="nextstepaction"]
-> [Create a data labeling project and export labels](how-to-create-labeling-projects.md).
+> [Train a machine learning image recognition model](/azure/machine-learning/how-to-use-labeled-dataset).
+
media-services Security Private Link How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/security-private-link-how-to.md
+
+ Title: Create a Media Services and Storage account with a private link
+
+description: Create a Media Services account and Storage Account with Private Links to a VNet. The Azure Resource Manager (ARM) template also sets up DNS for both the Private Links. Finally the template creates a VM to allow the user to try out the Private Links.
+++++ Last updated : 04/15/2021+++
+# Create a Media Services and Storage account with a Private Link
++
+Create a Media Services account and Storage Account with Private Links to a VNet. The Azure Resource Manager (ARM) template also sets up DNS for both the Private Links. Finally, the template creates a VM to allow the user to try out the Private Links.
+
+## Prerequisites
+
+Read [Quickstart: Create and deploy ARM templates by using the Azure portal](https://docs.microsoft.com/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal).
+
+## Limitations
+
+- For Media Services, the template only sets up Private Link for Key Delivery.
+- A network security group isn't created for the VM.
+- Network access control isn't configured for the Storage Account or Key Delivery.
+
+The template creates:
+
+- A Media Services account and a Storage Account (as normal)
+- A VNet with a subnet
+- For both the Media Services account and the Storage Account:
+ - Private Endpoints
+ - Private DNS Zones
+ - Virtual network links (to connect the private DNS zones to the VNet)
+ - Private DNS zone groups (to trigger the automatic creation of DNS records in the private DNS zones)
+- A VM (with associated public IP address and network interface)
+
+## Azure Resource Manager (ARM) template for private link
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ },
+ "vmAdminUsername": {
+ "type": "string"
+ },
+ "vmAdminPassword": {
+ "type": "secureString"
+ },
+ "vmSize": {
+ "type": "string",
+ "defaultValue": "Standard_D2_v3"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "storageAccountName": {
+ "type": "string"
+ },
+ "mediaServicesAccountName": {
+ "type": "string"
+ }
+ },
+ "functions": [],
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2"
+ },
+ {
+ "type": "Microsoft.Media/mediaservices",
+ "apiVersion": "2020-05-01",
+ "name": "[parameters('mediaServicesAccountName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "storageAccounts": [
+ {
+ "type": "Primary",
+ "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ }
+ ]
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2020-08-01",
+ "name": "myVnet",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+ "subnets": [
+ {
+ "name": "mySubnet",
+ "properties": {
+ "addressPrefix": "10.0.0.0/24",
+ "privateEndpointNetworkPolicies": "Disabled"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Network/privateEndpoints",
+ "apiVersion": "2020-08-01",
+ "name": "storagePrivateEndpoint",
+ "location": "[parameters('location')]",
+ "properties": {
+ "subnet": {
+ "id": "[reference(resourceId('Microsoft.Network/virtualNetworks', 'myVnet')).subnets[0].id]"
+ },
+ "privateLinkServiceConnections": [
+ {
+ "name": "storagePrivateEndpointConnection",
+ "properties": {
+ "privateLinkServiceId": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
+ "groupIds": [
+ "blob"
+ ]
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
+ "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/privateDnsZones",
+ "apiVersion": "2020-06-01",
+ "name": "privatelink.blob.core.windows.net",
+ "location": "global"
+ },
+ {
+ "type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks",
+ "apiVersion": "2020-06-01",
+ "name": "[format('{0}/storageDnsZoneLink', 'privatelink.blob.core.windows.net')]",
+ "location": "global",
+ "properties": {
+ "registrationEnabled": false,
+ "virtualNetwork": {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.blob.core.windows.net')]",
+ "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
+ "apiVersion": "2020-08-01",
+ "name": "[format('{0}/storagePrivateDnsZoneGroup', 'storagePrivateEndpoint')]",
+ "properties": {
+ "privateDnsZoneConfigs": [
+ {
+ "name": "config1",
+ "properties": {
+ "privateDnsZoneId": "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.blob.core.windows.net')]"
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.blob.core.windows.net')]",
+ "[resourceId('Microsoft.Network/privateEndpoints', 'storagePrivateEndpoint')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/privateEndpoints",
+ "apiVersion": "2020-08-01",
+ "name": "mediaServicesPrivateEndpoint",
+ "location": "[parameters('location')]",
+ "properties": {
+ "subnet": {
+ "id": "[reference(resourceId('Microsoft.Network/virtualNetworks', 'myVnet')).subnets[0].id]"
+ },
+ "privateLinkServiceConnections": [
+ {
+ "name": "mediaServicesPrivateEndpointConnection",
+ "properties": {
+ "privateLinkServiceId": "[resourceId('Microsoft.Media/mediaservices', parameters('mediaServicesAccountName'))]",
+ "groupIds": [
+ "keydelivery"
+ ]
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Media/mediaservices', parameters('mediaServicesAccountName'))]",
+ "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/privateDnsZones",
+ "apiVersion": "2020-06-01",
+ "name": "privatelink.media.azure.net",
+ "location": "global"
+ },
+ {
+ "type": "Microsoft.Network/privateDnsZones/virtualNetworkLinks",
+ "apiVersion": "2020-06-01",
+ "name": "[format('{0}/mediaServicesDnsZoneLink', 'privatelink.media.azure.net')]",
+ "location": "global",
+ "properties": {
+ "registrationEnabled": false,
+ "virtualNetwork": {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.media.azure.net')]",
+ "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
+ "apiVersion": "2020-08-01",
+ "name": "[format('{0}/mediaServicesPrivateDnsZoneGroup', 'mediaServicesPrivateEndpoint')]",
+ "properties": {
+ "privateDnsZoneConfigs": [
+ {
+ "name": "config1",
+ "properties": {
+ "privateDnsZoneId": "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.media.azure.net')]"
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/privateDnsZones', 'privatelink.media.azure.net')]",
+ "[resourceId('Microsoft.Network/privateEndpoints', 'mediaServicesPrivateEndpoint')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Network/publicIPAddresses",
+ "apiVersion": "2020-08-01",
+ "name": "publicIp",
+ "location": "[parameters('location')]",
+ "properties": {
+ "publicIPAllocationMethod": "Dynamic",
+ "dnsSettings": {
+ "domainNameLabel": "[toLower(parameters('vmName'))]"
+ }
+ }
+ },
+ {
+ "type": "Microsoft.Network/networkInterfaces",
+ "apiVersion": "2020-08-01",
+ "name": "vmNetworkInterface",
+ "location": "[parameters('location')]",
+ "properties": {
+ "ipConfigurations": [
+ {
+ "name": "ipConfig1",
+ "properties": {
+ "privateIPAllocationMethod": "Dynamic",
+ "publicIPAddress": {
+ "id": "[resourceId('Microsoft.Network/publicIPAddresses', 'publicIp')]"
+ },
+ "subnet": {
+ "id": "[reference(resourceId('Microsoft.Network/virtualNetworks', 'myVnet')).subnets[0].id]"
+ }
+ }
+ }
+ ]
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/publicIPAddresses', 'publicIp')]",
+ "[resourceId('Microsoft.Network/virtualNetworks', 'myVnet')]"
+ ]
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2020-12-01",
+ "name": "myVM",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "[parameters('vmSize')]"
+ },
+ "osProfile": {
+ "computerName": "[parameters('vmName')]",
+ "adminUsername": "[parameters('vmAdminUsername')]",
+ "adminPassword": "[parameters('vmAdminPassword')]"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2019-Datacenter",
+ "version": "latest"
+ },
+ "osDisk": {
+ "name": "osDisk",
+ "caching": "ReadWrite",
+ "createOption": "FromImage",
+ "managedDisk": {
+ "storageAccountType": "Standard_LRS"
+ },
+ "diskSizeGB": 128
+ }
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "[resourceId('Microsoft.Network/networkInterfaces', 'vmNetworkInterface')]"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/networkInterfaces', 'vmNetworkInterface')]"
+ ]
+ }
+ ],
+ "metadata": {
+ "_generator": {
+ "name": "bicep",
+ "version": "0.3.126.58533",
+ "templateHash": "2006367938138350540"
+ }
+ }
+}
+```
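+
+If you'd rather deploy the template programmatically than through the portal, the following is a minimal sketch using the Azure SDK for Python (`azure-identity` and `azure-mgmt-resource`); the subscription ID, resource group, deployment name, file name, and parameter values are placeholders you would replace with your own.
+
+```python
+import json
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+# Assumes the template above is saved locally as private-link-template.json.
+credential = DefaultAzureCredential()
+client = ResourceManagementClient(credential, "<subscription-id>")
+
+with open("private-link-template.json") as f:
+    template = json.load(f)
+
+# Placeholder parameter values; vmAdminPassword is a secureString in the template.
+parameters = {
+    "vmName": {"value": "pl-demo-vm"},
+    "vmAdminUsername": {"value": "azureuser"},
+    "vmAdminPassword": {"value": "<strong-password>"},
+    "storageAccountName": {"value": "<storage-account-name>"},
+    "mediaServicesAccountName": {"value": "<media-services-account-name>"},
+}
+
+# Deploy the template into an existing resource group.
+poller = client.deployments.begin_create_or_update(
+    "<resource-group-name>",
+    "media-private-link-deployment",
+    {"properties": {"mode": "Incremental", "template": template, "parameters": parameters}},
+)
+print(poller.result().properties.provisioning_state)
+```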
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Flow logs include the following properties:
* **Traffic Flow** - The direction of the traffic flow. Valid values are **I** for inbound and **O** for outbound. * **Traffic Decision** - Whether traffic was allowed or denied. Valid values are **A** for allowed and **D** for denied. * **Flow State - Version 2 Only** - Captures the state of the flow. Possible states are **B**: Begin, when a flow is created. Statistics aren't provided. **C**: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. **E**: End, when a flow is terminated. Statistics are provided.
- * **Packets - Source to destination - Version 2 Only** The total number of TCP or UDP packets sent from source to destination since last update.
- * **Bytes sent - Source to destination - Version 2 Only** The total number of TCP or UDP packet bytes sent from source to destination since last update. Packet bytes include the packet header and payload.
- * **Packets - Destination to source - Version 2 Only** The total number of TCP or UDP packets sent from destination to source since last update.
- * **Bytes sent - Destination to source - Version 2 Only** The total number of TCP and UDP packet bytes sent from destination to source since last update. Packet bytes include packet header and payload.
+ * **Packets - Source to destination - Version 2 Only** The total number of TCP packets sent from source to destination since last update.
+ * **Bytes sent - Source to destination - Version 2 Only** The total number of TCP packet bytes sent from source to destination since last update. Packet bytes include the packet header and payload.
+ * **Packets - Destination to source - Version 2 Only** The total number of TCP packets sent from destination to source since last update.
+ * **Bytes sent - Destination to source - Version 2 Only** The total number of TCP packet bytes sent from destination to source since last update. Packet bytes include packet header and payload.
**NSG flow logs Version 2 (vs Version 1)**
notification-hubs Notification Hubs Python Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-python-push-notification-tutorial.md
First, let us define a class representing a notification.
```python class Notification: def __init__(self, notification_format=None, payload=None, debug=0):
- valid_formats = ['template', 'apple', 'fcm',
+ valid_formats = ['template', 'apple', 'gcm',
'windows', 'windowsphone', "adm", "baidu"] if not any(x in notification_format for x in valid_formats): raise Exception( "Invalid Notification format. " +
- "Must be one of the following - 'template', 'apple', 'fcm', 'windows', 'windowsphone', 'adm', 'baidu'")
+ "Must be one of the following - 'template', 'apple', 'gcm', 'windows', 'windowsphone', 'adm', 'baidu'")
self.format = notification_format self.payload = payload
def make_http_request(self, url, payload, headers):
def send_notification(self, notification, tag_or_tag_expression=None): url = self.Endpoint + self.HubName + '/messages' + self.API_VERSION
- json_platforms = ['template', 'apple', 'fcm', 'adm', 'baidu']
+ json_platforms = ['template', 'apple', 'gcm', 'adm', 'baidu']
if any(x in notification.format for x in json_platforms): content_type = "application/json"
def send_apple_notification(self, payload, tags=""):
self.send_notification(nh, tags)
-def send_fcm_notification(self, payload, tags=""):
- nh = Notification("fcm", payload)
+def send_google_notification(self, payload, tags=""):
+ nh = Notification("gcm", payload)
self.send_notification(nh, tags)
hub.send_apple_notification(alert_payload)
### Android ```python
-fcm_payload = {
+gcm_payload = {
'data': { 'msg': 'Hello!' } }
-hub.send_fcm_notification(fcm_payload)
+hub.send_google_notification(gcm_payload)
``` ### Kindle Fire
notification-hubs Xamarin Notification Hubs Push Notifications Android Gcm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/xamarin-notification-hubs-push-notifications-android-gcm.md
Your notification hub is configured to work with FCM, and you have the connectio
6. Add the following using statements to `MainActivity.cs`: ```csharp
- using Azure.Messaging.NotificationHubs;
+ using WindowsAzure.Messaging.NotificationHubs;
``` 7. Add the following properties to the MainActivity class:
In this tutorial, you sent broadcast notifications to all your Android devices r
[Notification Hubs How-To for Android]: /previous-versions/azure/dn282661(v=azure.100) [Use Notification Hubs to push notifications to users]: notification-hubs-aspnet-backend-ios-apple-apns-notification.md [Use Notification Hubs to send breaking news]: notification-hubs-windows-notification-dotnet-push-xplat-segmented-wns.md
-[GitHub]: https://github.com/Azure/azure-notificationhubs-android
+[GitHub]: https://github.com/Azure/azure-notificationhubs-android
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-resource-sets.md
Qualified name: `https://myblob.blob.core.windows.net/sample-data/data{N}.csv`
Display name: "data"
-## Known Issues with resource sets
+## Customizing resource set grouping using pattern rules
-Although resource sets work well in most cases, you might encounter the following issues, in which Azure Purview:
+When scanning a storage account, Azure Purview uses a set of defined patterns to determine whether a group of assets is a resource set. In some cases, Azure Purview's resource set grouping may not accurately reflect your data estate. These issues can include:
-- Incorrectly marks an asset as a resource set-- Puts an asset into the wrong resource set-- Incorrectly marks an asset as not being a resource set
+- Incorrectly marking an asset as a resource set
+- Putting an asset into the wrong resource set
+- Incorrectly marking an asset as not being a resource set
+To customize or override how Azure Purview detects which assets are grouped as resource sets and how they are displayed within the catalog, you can define pattern rules in the management center. For step-by-step instructions and syntax, please see [resource set pattern rules](how-to-resource-set-pattern-rules.md).
## Next steps To get started with Azure Purview, see [Quickstart: Create an Azure Purview account](create-catalog-portal.md).
purview Create Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-dotnet.md
Next, create a C# .NET console application in Visual Studio:
1. Launch **Visual Studio**. 2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
-3. In **Project name**, enter **ADFv2QuickStart**.
+3. In **Project name**, enter **PurviewQuickStart**.
4. Select **Create** to create the project. ## Install NuGet packages
purview How To Resource Set Pattern Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-resource-set-pattern-rules.md
+
+ Title: How to create resource set pattern rules
+description: Learn how to create a resource set pattern rule to overwrite how assets get grouped into resource sets
+ Last updated: 04/15/2021
+# Create resource set pattern rules
+
+At-scale data processing systems typically store a single table on a disk as multiple files. This concept is represented in Azure Purview by using resource sets. A resource set is a single object in the data catalog that represents a large number of assets in storage. To learn more, see [Understanding resource sets](concept-resource-sets.md).
+
+When scanning a storage account, Azure Purview uses a set of defined patterns to determine if a group of assets is a resource set. In some cases, Azure Purview's resource set grouping may not accurately reflect your data estate. Resource set pattern rules allow you to customize or override how Azure Purview detects which assets are grouped as resource sets and how they are displayed within the catalog.
+
+Pattern rules are currently supported in the following source types:
+- Azure Data Lake Storage Gen2
+- Azure Blob Storage
+- Azure Files
++
+## How to create a resource set pattern rule
+
+Follow the steps below to create a new resource set pattern rule:
+
+1. Go to the management center. Select **Pattern rules** from the menu under the Resource sets heading. Select **+ New** to create a new rule set.
+
+ :::image type="content" source="media/how-to-resource-set-pattern-rules/create-new-scoped-resource-set-rule.png" alt-text="Create new resource set pattern rule" border="true":::
+
+1. Enter the scope of your resource set pattern rule. Select your storage account type and the name of the storage account you wish to create a rule set on. Each set of rules is applied relative to a folder path scope specified in the **Folder path** field.
+
+ :::image type="content" source="media/how-to-resource-set-pattern-rules/create-new-scoped-resource-set-scope.png" alt-text="Create resource set pattern rule configurations" border="true":::
+
+1. To enter a rule for a configuration scope, select **+ New Rule**.
+
+1. Enter in the following fields to create a rule:
+
+ 1. **Rule name:** The name of the configuration rule. This field has no effect on the assets the rule applies to.
+
+ 1. **Qualified name:** A qualified path that uses a combination of text, dynamic replacers, and static replacers to match assets to the configuration rule. This path is relative to the scope of the configuration rule. See the [syntax](#syntax) section below for detailed instructions on how to specify qualified names.
+
+ 1. **Display name:** The display name of the asset. This field is optional. Use plain text and static replacers to customize how an asset is displayed in the catalog. For more detailed instructions, see the [syntax](#syntax) section below.
+
+    1. **Do not group as resource set:** If enabled, matched resources won't be grouped into a resource set.
+
+ :::image type="content" source="media/how-to-resource-set-pattern-rules/scoped-resource-set-rule-example.png" alt-text="Create new configuration rule." border="true":::
+
+1. Save the rule by clicking **Add**.
+
+> [!NOTE]
+> After a pattern rule is created, all new scans will apply the rule during ingestion. Existing assets in the data catalog will be updated via a background process which can take up to a few hours.
+
+## <a name="syntax"></a> Pattern rule syntax
+
+When creating resource set pattern rules, use the following syntax to specify which assets rules apply to.
+
+### Dynamic replacers (single brackets)
+
+Single brackets are used as **dynamic replacers** in pattern rules. Specify a dynamic replacer in the qualified name using the format `{<replacerName>:<replacerType>}`. If matched, dynamic replacers are used as a grouping condition that indicates assets should be represented as a resource set. If the assets are grouped into a resource set, the resource set qualified path contains `{replacerName}` where the replacer was specified.
+
+For example, if the two assets `folder1/file-1.csv` and `folder2/file-2.csv` matched the rule `{folder:string}/file-{NUM:int}.csv`, the resource set would be a single entity `{folder}/file-{NUM}.csv`.
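+
+To make the grouping behavior concrete, here is a small illustrative sketch in Python (not the service's implementation) that translates a rule with dynamic replacers into a regular expression and collapses matching assets into a single resource set qualified name. The `TYPE_PATTERNS` mapping covers only the two types used in this example.
+
+```python
+import re
+
+# Illustrative only: approximate how a rule with dynamic replacers could be
+# matched and collapsed into one resource set qualified name.
+TYPE_PATTERNS = {"string": r"[^/]+", "int": r"[0-9]+"}
+
+def rule_to_regex(rule: str):
+    # Replace each {name:type} with a named capture group for that type.
+    def repl(match):
+        name, kind = match.group(1), match.group(2)
+        return f"(?P<{name}>{TYPE_PATTERNS[kind]})"
+    return re.compile("^" + re.sub(r"\{(\w+):(\w+)\}", repl, rule) + "$")
+
+def resource_set_name(rule: str) -> str:
+    # The grouped entity keeps {replacerName} where the replacer was specified.
+    return re.sub(r"\{(\w+):(\w+)\}", r"{\1}", rule)
+
+rule = "{folder:string}/file-{NUM:int}.csv"
+pattern = rule_to_regex(rule)
+assets = ["folder1/file-1.csv", "folder2/file-2.csv"]
+
+matched = [a for a in assets if pattern.match(a)]
+print(resource_set_name(rule))  # {folder}/file-{NUM}.csv
+print(matched)                  # both assets collapse into that single entity
+```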
+
+#### Special case: Dynamic replacers when not grouping into resource set
+
+If *Do not group as resource set* is enabled for a pattern rule, the replacer name is an optional field. `{:<replacerType>}` is valid syntax. For example, `file-{:int}.csv` would successfully match for `file-1.csv` and `file-2.csv` and create two different assets instead of a resource set.
+
+### Static replacers (double brackets)
+
+Double brackets are used as **static replacers** in the qualified name of a pattern rule. Specify a static replacer in the qualified name using format `{{<replacerName>:<replacerType>}}`. If matched, each set of unique static replacer values will create different resource set groupings.
+
+For example, if the two assets `folder1/file-1.csv` and `folder2/file-2.csv` matched the rule `{{folder:string}}/file-{NUM:int}.csv`, two resource sets would be created: `folder1/file-{NUM}.csv` and `folder2/file-{NUM}.csv`.
+
+Static replacers can be used to specify the display name of an asset that matches a pattern rule. Using `{{<replacerName>}}` in the display name of a rule will use the matched value in the asset name.
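+
+As another illustrative sketch (again, not the service's implementation), the difference from dynamic replacers is that each distinct matched value of a static replacer produces its own grouping, and the captured value can be reused in the display name. The display name template shown here is hypothetical.
+
+```python
+import re
+from collections import defaultdict
+
+# Rule: {{folder:string}}/file-{NUM:int}.csv, display name "Data in {{folder}}" (hypothetical).
+rule_regex = re.compile(r"^(?P<folder>[^/]+)/file-(?P<NUM>[0-9]+)\.csv$")
+
+groups = defaultdict(list)
+for asset in ["folder1/file-1.csv", "folder1/file-2.csv", "folder2/file-2.csv"]:
+    match = rule_regex.match(asset)
+    if match:
+        # The static replacer keeps its matched value; the dynamic one is collapsed.
+        qualified = f"{match.group('folder')}/file-{{NUM}}.csv"
+        groups[qualified].append(asset)
+
+for qualified, members in groups.items():
+    folder = qualified.split("/")[0]
+    print(f"Data in {folder}: {qualified} <- {members}")
+# Two resource sets: folder1/file-{NUM}.csv (2 assets) and folder2/file-{NUM}.csv (1 asset).
+```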
+
+### Available replacement types
+
+Below are the available types that can be used in static and dynamic replacers:
+
+| Type | Structure |
+| - | |
+| string | A series of 1 or more Unicode characters including delimiters like spaces. |
+| int | A series of 1 or more 0-9 ASCII characters; it can be 0-prefixed (for example, 0001). |
+| guid | A 32-character or 8-4-4-4-12 string representation of a UUID as defined in [RFC 4122](https://tools.ietf.org/html/rfc4122). |
+| date | A series of 6 or 8 0-9 ASCII characters with optional separators: yyyymmdd, yyyy-mm-dd, yymmdd, yy-mm-dd, as specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). |
+| time | A series of 4 or 6 0-9 ASCII characters with optional separators: HHmm, HH:mm, HHmmss, HH:mm:ss, as specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). |
+| timestamp | A series of 12 or 14 0-9 ASCII characters with optional separators: yyyy-mm-ddTHH:mm, yyyymmddhhmm, yyyy-mm-ddTHH:mm:ss, yyyymmddHHmmss, as specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). |
+| boolean | Can contain 'true' or 'false', case insensitive. |
+| number | A series of 0 or more 0-9 ASCII characters, optionally 0-prefixed (for example, 0001), optionally followed by a dot '.' and a series of 1 or more 0-9 ASCII characters, which can be 0-postfixed (for example, .100). |
+| hex | A series of 1 or more ASCII characters from the set 0-9 and A-F; the value can be 0-prefixed. |
+| locale | A string that matches the syntax specified in [RFC 5646](https://tools.ietf.org/html/rfc5646). |
+
+## Order of resource set pattern rules getting applied
+
+Below is the order of operations for applying pattern rules:
+
+1. More specific scopes will take priority if an asset matches two rules. For example, rules in the scope `container/folder` will apply before rules in the scope `container`.
+
+1. The order of rules within a specific scope applies next. This order can be edited in the UX.
+
+1. If an asset doesn't match any specified rule, the default resource set heuristics apply.
+
+## Examples
+
+### Example 1
+
+SAP data extraction into full and delta loads
+
+#### Inputs
+
+Files:
+
+- `https://myazureblob.blob.core.windows.net/bar/customer/full/2020/01/13/saptable_customer_20200101_20200102_01.txt`
+- `https://myazureblob.blob.core.windows.net/bar/customer/full/2020/01/13/saptable_customer_20200101_20200102_02.txt`
+- `https://myazureblob.blob.core.windows.net/bar/customer/delta/2020/01/15/saptable_customer_20200101_20200102_01.txt`
+- `https://myazureblob.blob.core.windows.net/bar/customer/full/2020/01/17/saptable_customer_20200101_20200102_01.txt`
+- `https://myazureblob.blob.core.windows.net/bar/customer/full/2020/01/17/saptable_customer_20200101_20200102_02.txt`
+
+#### Pattern rule
+
+**Scope:** `https://myazureblob.blob.core.windows.net/bar/`
+
+**Display name:** 'External Customer'
+
+**Qualified Name:** `customer/{extract:string}/{year:int}/{month:int}/{day:int}/saptable_customer_{date_from:date}_{date_to:date}_{sequence:int}.txt`
+
+**Resource Set:** true
+
+#### Output
+
+One resource set asset
+
+**Display Name:** External Customer
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/customer/{extract}/{year}/{month}/{day}/saptable_customer_{date_from}_{date_to}_{sequence}.txt`
+
+### Example 2
+
+IoT data in avro format
+
+#### Inputs
+
+Files:
+
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-002.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/02-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/01-01-2020/22:33:22-001.avro`
+
+#### Pattern rules
+
+**Scope:** `https://myazureblob.blob.core.windows.net/bar/`
+
+Rule 1
+
+**Display name:** 'machine-89'
+
+**Qualified Name:** `raw/machinename-89/{date:date}/{time:time}-{id:int}.avro`
+
+**Resource Set:** true
+
+Rule 2
+
+**Display name:** 'machine-90'
+
+**Qualified Name:** `raw/machinename-90/{date:date}/{time:time}-{id:int}.avro`
+
+**Resource Set:** true
+
+#### Outputs
+
+2 resource sets
+
+Resource Set 1
+
+**Display Name:** machine-89
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/{date}/{time}-{id}.avro`
+
+Resource Set 2
+
+**Display Name:** machine-90
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/{date}/{time}-{id}.avro`
+
+### Example 3
+
+IoT data in avro format
+
+#### Inputs
+
+Files:
+
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-002.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/02-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/01-01-2020/22:33:22-001.avro`
+
+#### Pattern rule
+
+**Scope:** `https://myazureblob.blob.core.windows.net/bar/`
+
+**Display name:** 'Machine-{{machineid}}'
+
+**Qualified Name:** `raw/machinename-{{machineid:int}}/{date:date}/{time:time}-{id:int}.avro`
+
+**Resource Set:** true
+
+#### Outputs
+
+Resource Set 1
+
+**Display name:** Machine-89
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/{date}/{time}-{id}.avro`
+
+Resource Set 2
+
+**Display name:** Machine-90
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/{date}/{time}-{id}.avro`
+
+### Example 4
+
+Don't group into resource sets
+
+#### Inputs
+
+Files:
+
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-002.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/02-01-2020/22:33:22-001.avro`
+- `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/01-01-2020/22:33:22-001.avro`
+
+#### Pattern rule
+
+**Scope:** `https://myazureblob.blob.core.windows.net/bar/`
+
+**Display name:** `Machine-{{machineid}}`
+
+**Qualified Name:** `raw/machinename-{{machineid:int}}/{{:date}}/{{:time}}-{{:int}}.avro`
+
+**Resource Set:** false
+
+#### Outputs
+
+4 individual assets
+
+Asset 1
+
+**Display name:** Machine-89
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-001.avro`
+
+Asset 2
+
+**Display name:** Machine-89
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/01-01-2020/22:33:22-002.avro`
+
+Asset 3
+
+**Display name:** Machine-89
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-89/02-01-2020/22:33:22-001.avro`
+
+Asset 4
+
+**Display name:** Machine-90
+
+**Qualified Name:** `https://myazureblob.blob.core.windows.net/bar/raw/machinename-90/01-01-2020/22:33:22-001.avro`
+
+## Next steps
+
+Get started by [registering and scanning an Azure Data Lake Gen2 storage account](register-scan-adls-gen2.md).
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-search-catalog.md
Below are the operators that can be used to compose a search query. Operators ca
| NOT | Specifies that an asset can't contain the keyword to the right of the NOT clause | The query `hive NOT database` returns assets that contain 'hive', but not 'database'. | | () | Groups a set of keywords and operators together. When combining multiple operators, parenthesis specify the order of operations. | The query `hive AND (database OR warehouse)` returns assets that contain 'hive' and either 'database' or 'warehouse', or both. | | "" | Specifies exact content in a phrase that the query must match to. | The query `"hive database"` returns assets that contain the phrase "hive database" in their properties |
-| * | A wildcard that matches on one to many characters. Can't be the first character in a keyword. | The query `hiv\`* returns assets that have properties that starts with 'hiv' such as 'hive' or 'hive-table'. |
-| ? | A wildcard that matches on a single character. Can't be the first character in a keyword | The query `hiv?` returns assets that have properties that start with 'hiv' and are four letters such as 'hive' or 'hiva'. |
+| * | A wildcard that matches on one to many characters. Can't be the first character in a keyword. | The query `dat*` returns assets that have properties that starts with 'dat' such as 'data' or 'database'. |
+| ? | A wildcard that matches on a single character. Can't be the first character in a keyword | The query `dat?` returns assets that have properties that start with 'dat' and are four letters such as 'date' or 'data'. |
> [!Note] > Always specify Boolean operators (**AND**, **OR**, **NOT**) in all caps. Otherwise, case doesn't matter, nor do extra spaces.
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-kafka-dotnet.md
+
+ Title: Publish messages to and process messages from Azure Purview's Atlas Kafka topics via Event Hubs using .NET
+description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Purview's Apache Atlas Kafka topics by using the latest Azure.Messaging.EventHubs package.
+ms.devlang: dotnet
+ Last updated: 04/15/2021
+# Publish messages to and process messages from Azure Purview's Atlas Kafka topics via Event Hubs using .NET
+This quickstart shows how to send events to and receive events from Azure Purview's Atlas Kafka topics via event hub using the **Azure.Messaging.EventHubs** .NET library.
+
+> [!IMPORTANT]
+> A managed event hub is created as part of Purview account creation. See [Purview account creation](create-catalog-portal.md). You can publish messages to the event hub Kafka topic **ATLAS_HOOK**, and Purview will consume and process them. Purview publishes entity change notifications to the event hub Kafka topic **ATLAS_ENTITIES**, which you can consume and process. This quickstart uses the new **Azure.Messaging.EventHubs** library.
++
+## Prerequisites
+If you're new to Azure Event Hubs, see [Event Hubs overview](../event-hubs/event-hubs-about.md) before you do this quickstart.
+
+To complete this quickstart, you need the following prerequisites:
+
+- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
+- **Microsoft Visual Studio 2019**. The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, it is recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
+
+## Publish messages to Purview
+This section shows you how to create a .NET Core console application that sends events to Azure Purview via the event hub Kafka topic **ATLAS_HOOK**.
+
+## Create a Visual Studio project
+
+Next, create a C# .NET console application in Visual Studio:
+
+1. Launch **Visual Studio**.
+2. In the Start window, select **Create a new project** > **Console App (.NET Framework)**. .NET version 4.5.2 or above is required.
+3. In **Project name**, enter **PurviewKafkaProducer**.
+4. Select **Create** to create the project.
+
+### Create a console application
+
+1. Start Visual Studio 2019.
+1. Select **Create a new project**.
+1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**.
+ 1. Select **C#** for the programming language.
+ 1. Select **Console** for the type of the application.
+ 1. Select **Console App (.NET Core)** from the results list.
+ 1. Then, select **Next**.
++
+### Add the Event Hubs NuGet package
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.EventHubs** NuGet package and **Azure.Messaging.EventHubs.Producer** NuGet package:
+
+ ```cmd
+ Install-Package Azure.Messaging.EventHubs
+ ```
+
+ ```cmd
+ Install-Package Azure.Messaging.EventHubs.Producer
+ ```
++
+### Write code to send messages to the event hub
+
+1. Add the following `using` statements to the top of the **Program.cs** file:
+
+ ```csharp
+ using System;
+ using System.Text;
+ using System.Threading.Tasks;
+ using Azure.Messaging.EventHubs;
+ using Azure.Messaging.EventHubs.Producer;
+ ```
+
+2. Add constants to the `Program` class for the Event Hubs connection string and Event Hub name.
+
+ ```csharp
+ private const string connectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
+ private const string eventHubName = "<EVENT HUB NAME>";
+ ```
+
+    You can get the event hub namespace associated with your Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings in the **Properties** tab of your Purview account.
+
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
+
+ The event hub name should be **ATLAS_HOOK** for sending messages to Purview.
+
+3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` to push messages into Purview. See the code comments for details.
+
+ ```csharp
+    static async Task Main()
+    {
+        // Create an event producer client to add events to the event hub
+        EventHubProducerClient producer = new EventHubProducerClient(connectionString, eventHubName);
+
+        await ProduceMessage(producer);
+    }
+
+    static async Task ProduceMessage(EventHubProducerClient producer)
+    {
+        // Create a batch of events
+        using EventDataBatch eventBatch = await producer.CreateBatchAsync();
+
+        // Add events to the batch. An event is represented by a collection of bytes and metadata.
+        eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("<First event>")));
+        eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("<Second event>")));
+        eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("<Third event>")));
+
+        // Use the producer client to send the batch of events to the event hub
+        await producer.SendAsync(eventBatch);
+        Console.WriteLine("A batch of 3 events has been published.");
+    }
+ ```
+5. Build the project, and ensure that there are no errors.
+6. Run the program and wait for the confirmation message.
+
+ > [!NOTE]
+ > For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample04_PublishingEvents.md)
+
+### Sample create entity JSON message to create a SQL table with two columns
+
+```json
+
+ {
+ "msgCreatedBy": "nayenama",
+ "message": {
+ "entities": {
+ "referredEntities": {
+ "-1102395743156037": {
+ "typeName": "azure_sql_column",
+ "attributes": {
+ "owner": null,
+ "userTypeId": 61,
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable#OrderID",
+ "precision": 23,
+ "length": 8,
+ "description": "Sales Order ID",
+ "scale": 3,
+ "name": "OrderID",
+ "data_type": "int",
+ "table": {
+ "guid": "-1102395743156036",
+ "typeName": "azure_sql_table",
+ "entityStatus": "ACTIVE",
+ "displayText": "SalesOrderTable",
+ "uniqueAttributes": {
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable"
+ }
+ }
+ },
+ "guid": "-1102395743156037",
+ "version": 2
+ },
+ "-1102395743156038": {
+ "typeName": "azure_sql_column",
+ "attributes": {
+ "owner": null,
+ "userTypeId": 61,
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable#OrderDate",
+ "description": "Sales Order Date",
+ "scale": 3,
+ "name": "OrderDate",
+ "data_type": "datetime",
+ "table": {
+ "guid": "-1102395743156036",
+ "typeName": "azure_sql_table",
+ "entityStatus": "ACTIVE",
+ "displayText": "SalesOrderTable",
+ "uniqueAttributes": {
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable"
+ }
+ }
+ },
+ "guid": "-1102395743156038",
+ "status": "ACTIVE",
+ "createdBy": "ServiceAdmin",
+ "version": 0
+ }
+ },
+ "entity":
+ {
+ "typeName": "azure_sql_table",
+ "attributes": {
+ "owner": "admin",
+ "temporary": false,
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable",
+ "name" : "SalesOrderTable",
+ "description": "Sales Order Table added via Kafka",
+ "columns": [
+ {
+ "guid": "-1102395743156037",
+ "typeName": "azure_sql_column",
+ "uniqueAttributes": {
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable#OrderID"
+ }
+ },
+ {
+ "guid": "-1102395743156038",
+ "typeName": "azure_sql_column",
+ "uniqueAttributes": {
+ "qualifiedName": "mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable#OrderDate"
+ }
+ }
+ ]
+ },
+ "guid": "-1102395743156036",
+ "version": 0
+ }
+ },
+ "type": "ENTITY_CREATE_V2",
+ "user": "admin"
+ },
+ "version": {
+ "version": "1.0.0"
+ },
+ "msgCompressionKind": "NONE",
+ "msgSplitIdx": 1,
+ "msgSplitCount": 1
+}
+
+```
+
+## Consume messages from Purview
+This section shows how to write a .NET Core console application that receives messages from an event hub using an event processor. You need to use the **ATLAS_ENTITIES** event hub to receive messages from Purview. The event processor simplifies receiving events from event hubs by managing persistent checkpoints and parallel receptions from those event hubs.
+
+> [!WARNING]
+> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
+>
+> For example, If you are running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following steps in this section, you will also need to add code to target the Storage service API version 2019-02-02. For an example on how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/).
+
+
+### Create an Azure Storage and a blob container
+In this quickstart, you use Azure Storage as the checkpoint store. Follow these steps to create an Azure Storage account.
+
+1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal)
+2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)
+3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+
+ Note down the connection string and the container name. You'll use them in the receive code.
++
+### Create a project for the receiver
+
+1. In the Solution Explorer window, right-click the **PurviewKafkaProducer** solution, point to **Add**, and select **New Project**.
+1. Select **Console App (.NET Core)**, and select **Next**.
+1. Enter **PurviewKafkaConsumer** for the **Project name**, and select **Create**.
+
+### Add the Event Hubs NuGet package
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.EventHubs** NuGet package:
+
+ ```cmd
+ Install-Package Azure.Messaging.EventHubs
+ ```
+1. Run the following command to install the **Azure.Messaging.EventHubs.Processor** NuGet package:
+
+ ```cmd
+ Install-Package Azure.Messaging.EventHubs.Processor
+ ```
+
+### Update the Main method
+
+1. Add the following `using` statements at the top of the **Program.cs** file.
+
+ ```csharp
+ using System;
+ using System.Text;
+ using System.Threading.Tasks;
+ using Azure.Storage.Blobs;
+ using Azure.Messaging.EventHubs;
+ using Azure.Messaging.EventHubs.Consumer;
+ using Azure.Messaging.EventHubs.Processor;
+ ```
+1. Add constants to the `Program` class for the Event Hubs connection string, the event hub name, the blob storage connection string, and the blob container name. Replace the placeholders in brackets with the values that you got when creating the event hub and the storage account (access keys - primary connection string). Make sure that the `{Event Hubs namespace connection string}` is the namespace-level connection string, and not the event hub string.
+
+ ```csharp
+ private const string ehubNamespaceConnectionString = "<EVENT HUBS NAMESPACE - CONNECTION STRING>";
+ private const string eventHubName = "<EVENT HUB NAME>";
+ private const string blobStorageConnectionString = "<AZURE STORAGE CONNECTION STRING>";
+ private const string blobContainerName = "<BLOB CONTAINER NAME>";
+ ```
+
+    You can get the event hub namespace associated with your Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings in the **Properties** tab of your Purview account.
+
+ :::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
+
+    The event hub name should be **ATLAS_ENTITIES** for receiving messages from Purview.
+
+3. Replace the `Main` method with the following `async Main` method. See the code comments for details.
+
+ ```csharp
+ static async Task Main()
+ {
+ // Read from the default consumer group: $Default
+ string consumerGroup = EventHubConsumerClient.DefaultConsumerGroupName;
+
+ // Create a blob container client that the event processor will use
+ BlobContainerClient storageClient = new BlobContainerClient(blobStorageConnectionString, blobContainerName);
+
+ // Create an event processor client to process events in the event hub
+ EventProcessorClient processor = new EventProcessorClient(storageClient, consumerGroup, ehubNamespaceConnectionString, eventHubName);
+
+ // Register handlers for processing events and handling errors
+ processor.ProcessEventAsync += ProcessEventHandler;
+ processor.ProcessErrorAsync += ProcessErrorHandler;
+
+ // Start the processing
+ await processor.StartProcessingAsync();
+
+ // Wait for 10 seconds for the events to be processed
+ await Task.Delay(TimeSpan.FromSeconds(10));
+
+ // Stop the processing
+ await processor.StopProcessingAsync();
+ }
+ ```
+1. Now, add the following event and error handler methods to the class.
+
+ ```csharp
+ static async Task ProcessEventHandler(ProcessEventArgs eventArgs)
+ {
+ // Write the body of the event to the console window
+ Console.WriteLine("\tReceived event: {0}", Encoding.UTF8.GetString(eventArgs.Data.Body.ToArray()));
+
+ // Update checkpoint in the blob storage so that the app receives only new events the next time it's run
+ await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
+ }
+
+ static Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
+ {
+ // Write details about the error to the console window
+ Console.WriteLine($"\tPartition '{ eventArgs.PartitionId}': an unhandled exception was encountered. This was not expected to happen.");
+ Console.WriteLine(eventArgs.Exception.Message);
+ return Task.CompletedTask;
+ }
+ ```
+1. Build the project, and ensure that there are no errors.
+
+ > [!NOTE]
+ > For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/Sample01_HelloWorld.md).
+6. Run the receiver application.
+
+### Sample Message received from Purview
+
+```json
+{
+ "version":
+ {"version":"1.0.0",
+ "versionParts":[1]
+ },
+ "msgCompressionKind":"NONE",
+ "msgSplitIdx":1,
+ "msgSplitCount":1,
+ "msgSourceIP":"10.244.155.5",
+ "msgCreatedBy":
+ "",
+ "msgCreationTime":1618588940869,
+ "message":{
+ "type":"ENTITY_NOTIFICATION_V2",
+ "entity":{
+ "typeName":"azure_sql_table",
+ "attributes":{
+ "owner":"admin",
+ "createTime":0,
+ "qualifiedName":"mssql://nayenamakafka.eventhub.sql.net/salespool/dbo/SalesOrderTable",
+ "name":"SalesOrderTable",
+ "description":"Sales Order Table"
+ },
+ "guid":"ead5abc7-00a4-4d81-8432-d5f6f6f60000",
+ "status":"ACTIVE",
+ "displayText":"SalesOrderTable"
+ },
+ "operationType":"ENTITY_UPDATE",
+ "eventTime":1618588940567
+ }
+}
+```
+
+> [!IMPORTANT]
+> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Purview is currently enabled by default. If your scenario involves reading from Purview, contact us so that it can be allow-listed (provide the subscription ID and the name of the Purview account).
++
+## Next steps
+Check out the samples on GitHub.
+
+- [Event Hubs samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples)
+- [Event processor samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples)
+- [Atlas introduction to notifications](https://atlas.apache.org/2.0.0/Notifications.html)
purview Tutorial Import Create Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-import-create-glossary-terms.md
In this procedure, you import glossary terms via a .csv file:
This file contains a list of pre-populated terms that are relevant to your data estate.
+ > [!Important]
+ > The email addresses for Stewards and Experts in the .CSV file should be the primary addresses of the users from Azure AD. Alternate emails, user principal names, and non-Azure AD emails are not yet supported. Replace the email addresses with the Azure AD primary addresses from your organization.
+ 1. To begin importing, select **Glossary**, and then select **Import terms**. :::image type="content" source="./media/tutorial-import-create-glossary-terms/import-glossary-terms-select.png" alt-text="Screenshot showing how to import glossary terms.":::
remote-rendering Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/resources/troubleshoot.md
Coplanar surfaces can have a number of different causes:
In some cases, custom native C++ apps that use a multi-pass stereo rendering mode for local content (rendering to the left and right eye in separate passes) after calling [**BlitRemoteFrame**](../concepts/graphics-bindings.md#render-remote-image) can trigger a driver bug. The bug results in non-deterministic rasterization glitches, causing individual triangles or parts of triangles of the local content to randomly disappear. For performance reasons, it is recommended anyway to render local content with a more modern single-pass stereo rendering technique, for example using **SV_RenderTargetArrayIndex**.
+## Conversion File Download Errors
+
+The Conversion service may encounter errors downloading files from blob storage because of path length limits imposed by Windows and the service. File paths and file names in your blob storage must not exceed 178 characters. For example, given a `blobPrefix` of `models/Assets`, which is 13 characters:
+
+`models/Assets/<any file or folder path greater than 164 characters will fail the conversion>`
+
+The Conversion service will download all files specified under the `blobPrefix`, not just the files used in the conversion. The files/folder causing issues may be less obvious in these cases so it's important to check everything contained in the storage account under `blobPrefix`. See the example inputs below for what gets downloaded.
+``` json
+{
+ "settings": {
+ "inputLocation": {
+ "storageContainerUri": "https://contosostorage01.blob.core.windows.net/arrInput",
+ "blobPrefix": "models/Assets",
+ "relativeInputAssetPath": "myAsset.fbx"
+ ...
+ }
+}
+```
+
+```
+models
+├───Assets
+│   │   myAsset.fbx              <- Asset
+│   │
+│   ├───Textures
+│   │       myTexture.png        <- Used in conversion
+│   │
+│   └───MyFiles
+│           myOtherFile.txt      <- File also downloaded under blobPrefix
+│
+└───OtherFiles
+        myReallyLongFileName.txt <- Ignores files not under blobPrefix
+```
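+
+To check a container for paths that would hit this limit before you start a conversion, a minimal sketch using the `azure-storage-blob` Python package might look like the following; the container URL, SAS token, and prefix are placeholders taken from the example above.
+
+```python
+from azure.storage.blob import ContainerClient
+
+MAX_BLOB_PATH = 178  # limit described above (blobPrefix plus file path)
+
+# Placeholder container URL and SAS token.
+container = ContainerClient.from_container_url(
+    "https://contosostorage01.blob.core.windows.net/arrInput?<sas-token>"
+)
+
+# Everything under blobPrefix is downloaded, so list all of it, not just the input asset.
+too_long = [
+    blob.name
+    for blob in container.list_blobs(name_starts_with="models/Assets")
+    if len(blob.name) > MAX_BLOB_PATH
+]
+
+for name in too_long:
+    print(f"{len(name)} characters: {name}")
+```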
## Next steps * [System requirements](../overview/system-requirements.md)
search Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/security-baseline.md
Alternatively, you can enable and on-board this data to Azure Sentinel or a thir
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Search**:
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/antimalware.md
To enable and configure Microsoft Antimalware using PowerShell cmdlets:
The Antimalware XML configuration settings template is included in the [Microsoft Antimalware For Azure - Code Samples](/samples/browse/?redirectedfrom=TechNet-Gallery "Microsoft Antimalware For Azure - Code Samples"), showing the supported Antimalware configuration settings.
-### Enable and configure Antimalware to Azure Cloud Service Extended Support (CS-ES) using PowerShell cmdlets
-
-To enable and configure Microsoft Antimalware using PowerShell cmdlets:
-
-1. Set up your PowerShell environment - Refer to the documentation at <https://github.com/Azure/azure-powershell>
-2. Use the [New-AzCloudServiceExtensionObject](/powershell/module/az.cloudservice/new-azcloudserviceextensionobject?view=azps-5.7.0&preserve-view=true) cmdlet to enable and configure Microsoft Antimalware for your Cloud Service VM.
-
-The following code sample is available:
--- [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](antimalware-code-samples.md#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)- ### Cloud Services and Virtual Machines - Configuration Using PowerShell cmdlets An Azure application or service can retrieve the Microsoft Antimalware configuration for Cloud Services and Virtual Machines using PowerShell cmdlets.
The following code samples are available:
- [Deploy Microsoft Antimalware on ARM VMs](antimalware-code-samples.md#enable-and-configure-microsoft-antimalware-for-azure-resource-manager-vms) - [Add Microsoft Antimalware to Azure Service Fabric Clusters](antimalware-code-samples.md#add-microsoft-antimalware-to-azure-service-fabric-clusters)
+### Enable and configure Antimalware to Azure Cloud Service Extended Support (CS-ES) using PowerShell cmdlets
+
+To enable and configure Microsoft Antimalware using PowerShell cmdlets:
+
+1. Set up your PowerShell environment - Refer to the documentation at <https://github.com/Azure/azure-powershell>
+2. Use the [New-AzCloudServiceExtensionObject](/powershell/module/az.cloudservice/new-azcloudserviceextensionobject?view=azps-5.7.0&preserve-view=true) cmdlet to enable and configure Microsoft Antimalware for your Cloud Service VM.
+
+The following code sample is available:
+
+- [Add Microsoft Antimalware to Azure Cloud Service using Extended Support(CS-ES)](antimalware-code-samples.md#add-microsoft-antimalware-to-azure-cloud-service-using-extended-support)
+ ## Next steps See [code samples](antimalware-code-samples.md) to enable and configure Microsoft Antimalware for Azure Resource Manager (ARM) virtual machines.
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-stateless-node-types.md
description: Learn how to create and deploy stateless node types in Azure Servic
Previously updated : 09/25/2020 Last updated : 04/16/2021 # Deploy an Azure Service Fabric cluster with stateless-only node types
spring-cloud Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/tutorial-custom-domain.md
Certificates encrypt web traffic. These TLS/SSL certificates can be stored in Az
## Keyvault Private Link Considerations
-The Azure Spring Cloud management IPs aren't part of the Azure Trusted Microsoft services. Therefore, to allow Azure Spring Cloud to load certificates from a Key Vault protected with Private endpoint connections, you must add the following IPs to Azure Key Vault Firewall:
+The Azure Spring Cloud management IPs are not yet part of the Azure Trusted Microsoft services. Therefore, to allow Azure Spring Cloud to load certificates from a Key Vault protected with Private endpoint connections, you must add the following IPs to Azure Key Vault Firewall:
``` 20.53.123.160 52.143.241.210 40.65.234.114 52.142.20.14 20.54.40.121 40.80.210.49 52.253.84.152 20.49.137.168 40.74.8.134 51.143.48.243
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/local-development.md
The following chart shows how requests are handled locally.
:::image type="content" source="media/local-development/cli-conceptual.png" alt-text="Azure Static Web App CLI request and response flow"::: > [!IMPORTANT]
-> Navigate to [http://localhost:4280](http://localhost:4280) to access the application served by the CLI.
+> Navigate to `http://localhost:4280` to access the application served by the CLI.
- **Requests** made to port `4280` are forwarded to the appropriate server depending on the type of request.
Open a terminal to the root folder of your existing Azure Static Web Apps site.
`swa start`
-1. Navigate to [http://localhost:4280](http://localhost:4280) to view the app in the browser.
+1. Navigate to http://localhost:4280 to view the app in the browser.
### Other ways to start the CLI
storage Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/security-baseline.md
Note: Classic storage accounts do not support firewalls and virtual networks.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Storage**:
Additionally, use Virtual network service endpoint policies to filter egress vir
**Responsibility**: Shared
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Storage**:
Additional information is available at the referenced links.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.ClassicStorage**:
storage Storage Explorer Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-sign-in.md
+
+ Title: Sign in to Azure Storage Explorer | Microsoft Docs
+description: Documentation on signing into Azure Storage Explorer
++++ Last updated : 04/01/2021+++
+# Sign in to Storage Explorer
+
+Sign-in is the recommended way to access your Azure storage resources with Storage Explorer. By signing in, you take advantage of Azure AD-backed permissions, such as RBAC and Azure Data Lake Storage Gen2 POSIX ACLs.
+
+## How to sign in
+
+To sign in to Storage Explorer, open the **Connect dialog**. You can open the **Connect dialog** either from the left-hand vertical toolbar, or by clicking on **Add account...** on the **Account Panel**.
+
+Once you have the dialog open, choose **Subscription** as the type of resource you want to connect to and click **Next**.
+
+You now need to choose what Azure environment you want to sign into. You can pick from any of the known environments, such as Azure or Azure China, or you can add your own environment. Once you have your environment selected, click **Next**.
+
+At this point, your OS' **default web browser** will launch and a sign-in page will be opened. For best results, leave this browser window open as long as you're using Storage Explorer or at least until you've performed all expected MFA. When you have finished signing in, you can return to Storage Explorer.
+
+## Managing accounts
+
+You can manage and remove Azure accounts that you've signed into from the **Account Panel**. You can open the **Account Panel** by clicking on the **Manage Accounts** button on the left-hand vertical toolbar.
+
+In the **Account Panel** you'll see any accounts that you have signed into. Under each account will be:
+- The tenants the account belongs to
+- For each tenant, the subscriptions you have access to
+
+By default, Storage Explorer only signs you into your home tenant. If you want to view subscriptions and resources from another tenant, you'll need to activate that tenant. To activate a tenant, check the checkbox next to it. Once you're done working with a tenant, you can uncheck its checkbox to deactivate it. You cannot deactivate your home tenant.
+
+After activating a tenant, you may need to reenter your credentials before Storage Explorer can load subscriptions or access resources from the tenant. Having to reenter your credentials usually happens because of a conditional access (CA) policy such as multi-factor authentication (MFA). And even though you may have already performed MFA for another tenant, you might still have to do it again. To reenter your credentials, simply click on **Reenter credentials...**. You can also click on **Error details...** to see exactly why subscriptions failed to load.
+
+Once your subscriptions have loaded, you can choose which ones you want to filter in/out by checking or unchecking their checkboxes.
+
+If you want to remove your entire Azure account, then click **Remove** next to the account.
+
+## Changing where sign-in happens
+
+By default, sign-in happens in your OS' **default web browser**. Signing in with your default web browser streamlines how you access resources secured by CA policies, such as MFA. If for some reason signing in with your OS' **default web browser** isn't working, you can change where or how Storage Explorer performs sign-in.
+
+Under **Settings** > **Application** > **Sign-in**, look for the **Sign in with** setting. There are three options:
+- **Default Web Browser**: sign-in will happen in your OS' **default web browser**. This option is recommended.
+- **Integrated Sign-In**: sign-in will happen in a Storage Explorer window. This option may be useful if you're trying to log in with multiple Microsoft accounts (MSAs) at once. You may have issues with some CA policies if you choose this option.
+- **Device Code Flow**: Storage Explorer will give you a code to enter into a browser window. This option isn't recommended. Device code flow isn't compatible with many CA policies.
+
+## Troubleshooting sign-in issues
+
+If you're having trouble signing in, or are having issues with an Azure account after signing in, refer to the [sign in section of the Storage Explorer troubleshooting guide](./storage-explorer-troubleshooting.md#sign-in-issues).
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-troubleshooting.md
If you can't find any self-signed certificates by following these steps, contact
## Sign-in issues
-### Blank sign-in dialog box
+### Understanding sign-in
-Blank sign-in dialog boxes most often occur when Active Directory Federation Services (AD FS) prompts Storage Explorer to perform a redirect, which is unsupported by Electron. To work around this issue, you can try to use Device Code Flow for sign-in. To do so, follow these steps:
+Make sure you have read the [Sign in to Storage Explorer](./storage-explorer-sign-in.md) documentation.
-1. On the left vertical tool bar, open **Settings**. In the Settings Panel, go to **Application** > **Sign in**. Enable **Use device code flow sign-in**.
-2. Open the **Connect** dialog box (either through the plug icon on the left-side vertical bar or by selecting **Add Account** on the account panel).
-3. Choose the environment you want to sign in to.
-4. Select **Sign In**.
-5. Follow the instructions on the next panel.
+### Frequently having to reenter credentials
-If you can't sign in to the account you want to use because your default browser is already signed in to a different account, do one of the following:
+Having to reenter credentials is most likely the result of conditional access policies set by your AAD administrator. When Storage Explorer asks you to reenter credentials from the account panel, you should see an **Error details...** link. Click it to see why Storage Explorer is asking you to reenter credentials. Conditional access policy errors that require you to reenter credentials may look something like these:
+- The refresh token has expired...
+- You must use multi-factor authentication to access...
+- Due to a configuration change made by your administrator...
-- Manually copy the link and code into a private session of your browser.-- Manually copy the link and code into a different browser.
+To reduce the frequency of having to reenter credentials due to errors like the ones above, you will need to talk to your AAD administrator.
+
+### Conditional access policies
+
+If you have conditional access policies that need to be satisfied for your account, make sure you are using the **Default Web Browser** value for the **Sign in with** setting. For information on that setting, see [Changing where sign in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens).
+
+### Unable to acquire token, tenant is filtered out
+
+If you see an error message saying that a token cannot be acquired because a tenant is filtered out, you are trying to access a resource that is in a tenant you have filtered out. To unfilter the tenant, go to the **Account Panel** and make sure the checkbox for the tenant specified in the error is checked. Refer to [Managing accounts](./storage-explorer-sign-in.md#managing-accounts) for more information on filtering tenants in Storage Explorer.
+
+### Authentication library failed to start properly
+
+If, on startup, you see an error message saying that Storage Explorer's authentication library failed to start properly, make sure your install environment meets all [prerequisites](../../vs-azure-tools-storage-manage-with-storage-explorer.md#prerequisites). Not meeting prerequisites is the most likely cause of this error message.
+
+If you believe that your install environment meets all prerequisites, then [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues/new). When you open your issue, make sure to include:
+- Your OS.
+- What version of Storage Explorer you are trying to use.
+- Whether you checked the prerequisites.
+- [Authentication logs](#authentication-logs) from an unsuccessful launch of Storage Explorer. Verbose authentication logging is automatically enabled after this type of error occurs.
+
+### Blank window when using integrated sign-in
+
+If you have chosen to use **Integrated Sign-In** and are seeing a blank sign-in window, you will likely need to switch to a different sign-in method. Blank sign-in dialog boxes most often occur when an Active Directory Federation Services (AD FS) server prompts Storage Explorer to perform a redirect that is unsupported by Electron.
+
+To switch to a different sign-in method, change the **Sign in with** setting under **Settings** > **Application** > **Sign-in**. For information on the different types of sign-in methods, see [Changing where sign in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens).
### Reauthentication loop or UPN change
-If you're in a reauthentication loop or have changed the UPN of one of your accounts, follow these steps:
+If you're in a reauthentication loop or have changed the UPN of one of your accounts, try these steps:
-1. Remove all accounts and then close Storage Explorer.
-2. Delete the .IdentityService folder from your machine. On Windows, the folder is located at `C:\users\<username>\AppData\Local`. For Mac and Linux, you can find the folder at the root of your user directory.
-3. If you're running Mac or Linux, you'll also need to delete the Microsoft.Developer.IdentityService entry from your operating system's keystore. On the Mac, the keystore is the *Gnome Keychain* application. In Linux, the application is typically called _Keyring_, but the name might differ depending on your distribution.
+1. Open Storage Explorer.
+2. Go to **Help** > **Reset**.
+3. Make sure at least **Authentication** is checked. You can uncheck other items you don't want to reset.
+4. Click the **Reset** button.
+5. Restart Storage Explorer and try signing in again.
-### Conditional Access
+If you continue to have issues after doing a reset, try these steps:
-Because of a limitation in the Azure AD Library used by Storage Explorer, Conditional Access isn't supported when Storage Explorer is being used on Windows 10, Linux, or macOS.
+1. Open Storage Explorer.
+2. Remove all accounts and then close Storage Explorer.
+3. Delete the `.IdentityService` folder from your machine. On Windows, the folder is located at `C:\users\<username>\AppData\Local`. For Mac and Linux, you can find the folder at the root of your user directory. (A terminal sketch for macOS and Linux follows these steps.)
+4. If you're running Mac or Linux, you'll also need to delete the Microsoft.Developer.IdentityService entry from your operating system's keystore. On macOS, the keystore is the *Keychain* application. On Linux, the application is typically called _Keyring_, but the name might differ depending on your distribution.
+5. Restart Storage Explorer and try signing in again.
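On macOS and Linux, steps 3 and 4 can also be done from a terminal. The following is only a sketch under the assumptions noted in the comments; the UI route described above works just as well.

```bash
# A hedged sketch for macOS and Linux only; it assumes the default folder location
# described in step 3. Close Storage Explorer before running it.
rm -rf ~/.IdentityService

# macOS only: the keystore entry from step 4 can usually be removed with the security CLI.
# If this doesn't find the item, remove "Microsoft.Developer.IdentityService" in the Keychain app instead.
security delete-generic-password -s "Microsoft.Developer.IdentityService" || true
```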
-## Mac Keychain errors
+### macOS: keychain errors or no sign-in window
The macOS Keychain can sometimes enter a state that causes issues for the Storage Explorer authentication library. To get the Keychain out of this state, follow these steps:
The macOS Keychain can sometimes enter a state that causes issues for the Storag
6. You're prompted with a message like "Service hub wants to access the Keychain." Enter your Mac admin account password and select **Always Allow** (or **Allow** if **Always Allow** isn't available). 7. Try to sign in.
-### General sign-in troubleshooting steps
+### Default browser doesn't open
+
+If your default browser doesn't open when you try to sign in, try the following techniques:
+- Restart Storage Explorer
+- Open your browser manually before starting sign-in
+- Try using **Integrated Sign-In**. See [Changing where sign in happens](./storage-explorer-sign-in.md#changing-where-sign-in-happens) for instructions on how to do this.
-* If you're on macOS, and the sign-in window never appears over the **Waiting for authentication** dialog box, try [these steps](#mac-keychain-errors).
-* Restart Storage Explorer.
-* If the authentication window is blank, wait at least one minute before closing the authentication dialog box.
-* Make sure that your proxy and certificate settings are properly configured for both your machine and Storage Explorer.
-* If you're running Windows and have access to Visual Studio 2019 on the same machine and to the sign-in credentials, try signing in to Visual Studio 2019. After a successful sign-in to Visual Studio 2019, you can open Storage Explorer and see your account in the account panel.
+### Other sign-in issues
-If none of these methods work, [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
+If none of the above applies to your sign-in issue, or if these suggestions fail to resolve it, [open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
### Missing subscriptions and broken tenants
If you can't retrieve your subscriptions after you successfully sign in, try the
* Make sure you've signed in through the correct Azure environment (Azure, Azure China 21Vianet, Azure Germany, Azure US Government, or Custom Environment). * If you're behind a proxy server, make sure you've configured the Storage Explorer proxy correctly. * Try removing and re-adding the account.
-* If there's a "More information" link, check which error messages are being reported for the tenants that are failing. If you aren't sure how to respond to the error messages, feel free to [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
+* If there's a "More information" or "Error details" link, check which error messages are being reported for the tenants that are failing. If you aren't sure how to respond to the error messages, feel free to [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues).
-## Can't remove an attached account or storage resource
+## Can't remove an attached storage account or resource
If you can't remove an attached account or storage resource through the UI, you can manually delete all attached resources by deleting the following folders:
Part 3: Sanitize the Fiddler trace
## Next steps
-If none of these solutions work for you, [open an issue in GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues). You can also do this by selecting the **Report issue to GitHub** button in the lower-left corner.
+If none of these solutions work for you, you can:
+- Create a support ticket
+- [Open an issue on GitHub](https://github.com/Microsoft/AzureStorageExplorer/issues). You can also do this by selecting the **Report issue to GitHub** button in the lower-left corner.
![Feedback](./media/storage-explorer-troubleshooting/feedback-button.PNG)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
Previously updated : 03/05/2021 Last updated : 04/16/2021
The following table shows which types of storage accounts support ZRS in which r
| Storage account type | Supported regions | Supported services | |--|--|--|
-| General-purpose v2<sup>1</sup> | (Africa) South Africa North<br /> (Asia Pacific) East Asia<br /> (Asia Pacific) Southeast Asia<br /> (Asia Pacific) Australia East<br /> (Asia Pacific) Central India<br /> (Asia Pacific) Japan East<br /> (Asia Pacific) Korea Central<br /> (Canada) Canada Central<br /> (Europe) North Europe<br /> (Europe) West Europe<br /> (Europe) France Central<br /> (Europe) Germany West Central<br /> (Europe) Norway East<br /> (Europe) Switzerland North<br /> (Europe) UK South<br /> (Middle East) UAE North<br /> (South America) Brazil South<br /> (US) Central US<br /> (US) East US<br /> (US) East US 2<br /> (US) North Central US<br />(US) South Central US<br /> (US) West US<br /> (US) West US 2 | Block blobs<br /> Page blobs<sup>2</sup><br /> File shares (standard)<br /> Tables<br /> Queues<br /> |
+| General-purpose v2<sup>1</sup> | (Africa) South Africa North<br /> (Asia Pacific) Southeast Asia<br /> (Asia Pacific) Australia East<br /> (Asia Pacific) Japan East<br /> (Canada) Canada Central<br /> (Europe) North Europe<br /> (Europe) West Europe<br /> (Europe) France Central<br /> (Europe) Germany West Central<br /> (Europe) UK South<br /> (South America) Brazil South<br /> (US) Central US<br /> (US) East US<br /> (US) East US 2<br /> (US) South Central US<br /> (US) West US<br /> (US) West US 2 | Block blobs<br /> Page blobs<sup>2</sup><br /> File shares (standard)<br /> Tables<br /> Queues<br /> |
| BlockBlobStorage<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only | | FileStorage | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2 | Premium files shares only |
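As an illustration of the table above, a general-purpose v2 account with ZRS can be created in one of the supported regions with the Azure CLI. This is only a sketch; the account name, resource group, and region are placeholders.

```bash
# A hedged sketch; the account name, resource group, and region are placeholders.
# Creates a general-purpose v2 account with zone-redundant storage in a supported region.
az storage account create \
  --name mystoragezrs001 \
  --resource-group my-rg \
  --location westus2 \
  --kind StorageV2 \
  --sku Standard_ZRS
```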
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
The following comparison matrix shows basic functionality of different tools tha
| **Age distribution over time** | No | Yes | Yes | Yes | | **Access time** | No | Yes | Yes | Yes | | **Modified time** | No | Yes | Yes | Yes |
-| **Creation time** | No | Yes | Yes | Yes (SMB only) |
+| **Creation time** | No | Yes | Yes | Yes |
| **Per file / object report status** | Partial | Yes | Yes | Yes | ## Licensing
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/backuprestore/restore-sql-pool.md
Title: Restore an existing dedicated SQL pool description: How-to guide for restoring an existing dedicated SQL pool.-
synapse-analytics Sqlpool Create Restore Point https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/backuprestore/sqlpool-create-restore-point.md
Title: Create a user defined restore point for a dedicated SQL pool description: Learn how to use the Azure portal to create a user-defined restore point for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics How To Discover Connect Analyze Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md
Title: Discover, connect, and explore data in Synapse using Azure Purview description: Guide on how to discover data, connect them and explore them in Synapse- -++ Last updated 12/16/2020
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
Title: Connect an Azure Purview AccountΓÇ» description: Connect an Azure Purview Account to a Synapse workspace.- -++ Last updated 12/16/2020
synapse-analytics Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/cicd/continuous-integration-deployment.md
Title: Continuous integration and delivery for Synapse workspace description: Learn how to use continuous integration and delivery to deploy changes in workspace from one environment (development, test, production) to another.- -++ Last updated 11/20/2020
synapse-analytics Source Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/cicd/source-control.md
Title: Source control in Synapse Studio description: Learn how to configure source control in Azure Synapse Studio- -++ Last updated 11/20/2020
synapse-analytics Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/concepts-data-flow-overview.md
+ Last updated 12/16/2020
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Title: Differences from Azure Data Factory
description: Learn how the data integration capabilities of Azure Synapse Analytics differ from those of Azure Data Factory -++ Last updated 12/10/2020
synapse-analytics Data Integration Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/data-integration-data-lake.md
Title: Ingest into Azure Data Lake Storage Gen2
description: Learn how to ingest data into Azure Data Lake Storage Gen2 in Azure Synapse Analytics -++ - Last updated 04/15/2020
synapse-analytics Data Integration Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/data-integration-sql-pool.md
-+ Last updated 11/03/2020
synapse-analytics Linked Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/linked-service.md
-+ Last updated 04/15/2020
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Previously updated : 12/31/2020 Last updated : 04/15/2021 # Analyze data with a serverless SQL pool
Every workspace comes with a pre-configured serverless SQL pool called **Built-i
## Analyze NYC Taxi data with a serverless SQL pool - 1. In Synapse Studio, go to the **Develop** hub 1. Create a new SQL script. 1. Paste the following code into the script.
Every workspace comes with a pre-configured serverless SQL pool called **Built-i
TOP 100 * FROM OPENROWSET(
- BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
+ BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
FORMAT='PARQUET' ) AS [result] ```
-1. Click **Run**
+1. Click **Run**.
## Next steps
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
Title: "Azure Synapse Analytics: Migration guide" description: Follow this guide to migrate your databases to an Azure Synapse Analytics dedicated SQL pool. -+ ms.devlang:
synapse-analytics Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/plan-manage-costs.md
+ Last updated 12/09/2020
synapse-analytics Quickstart Copy Activity Load Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-copy-activity-load-sql-pool.md
+ Last updated 11/02/2020
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-powershell.md
-+ Last updated 10/19/2020
synapse-analytics Quickstart Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-data-flow.md
description: This tutorial provides step-by-step instructions for using Azure S
-++ Last updated 11/03/2020
synapse-analytics Quickstart Deployment Template Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-deployment-template-workspaces.md
Title: 'Quickstart: Create an Azure Synapse workspace Azure Resource Manager tem
description: Learn how to create a Synapse workspace by using Azure Resource Manager template (ARM template). --++
synapse-analytics Quickstart Load Studio Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-load-studio-sql-pool.md
Title: 'Quickstart: Bulk load data with a dedicated SQL pool' description: Use Synapse Studio to bulk load data into a dedicated SQL pool in Azure Synapse Analytics. -+ Last updated 12/11/2020-+
synapse-analytics Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security-baseline.md
Title: Azure security baseline for Synapse Analytics
description: The Synapse Analytics security baseline provides procedural guidance and resources for implementing the security recommendations specified in the Azure Security Benchmark. + Last updated 03/16/2021
Alternatively, when connecting to your Synapse SQL pool, narrow down the scope o
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Auditing can be enabled both on the database or server level, and is suggested t
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Auditing can be enabled both on the database or server level, and is suggested t
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Alternatively, you may enable and on-board data to Azure Sentinel.
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
SQL Server audit lets you create server audits, which can contain server audit s
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Data Discovery &amp; Classification is built into Azure Synapse SQL. It provides
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Additionally, you can set up a dynamic data masking (DDM) policy in the Azure po
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Authorization is controlled by your user account's database role memberships and
**Responsibility**: Shared
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Additionally, you can set up alerts for databases in your SQL Synapse pool using
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Data Discovery &amp; Classification is built into Azure Synapse SQL. It provides
**Responsibility**: Customer
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
Pre-scan any content being uploaded to non-compute Azure resources, such as App
**Responsibility**: Shared
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
If you are using a customer-managed key to encrypt your Database Encryption Key,
**Responsibility**: Shared
-**Azure Security Center monitoring**: The [Azure Security Benchmark](/home/mbaldwin/docs/asb/azure-docs-pr/articles/governance/policy/samples/azure-security-benchmark.md) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/security-center-recommendations.md). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/home/mbaldwin/docs/asb/azure-docs-pr/articles/security-center/azure-defender.md) plan for the related services.
+**Azure Security Center monitoring**: The [Azure Security Benchmark](/azure/governance/policy/samples/azure-security-benchmark) is the default policy initiative for Security Center and is the foundation for [Security Center's recommendations](/azure/security-center/security-center-recommendations). The Azure Policy definitions related to this control are enabled automatically by Security Center. Alerts related to this control may require an [Azure Defender](/azure/security-center/azure-defender) plan for the related services.
**Azure Policy built-in definitions - Microsoft.Sql**:
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 04/14/2021-+
synapse-analytics Apache Spark Custom Conda Channel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-custom-conda-channel.md
conda index channel1/linux-64
conda index channel1 ```
-For more information, you can also [visit the Conda user guide](https://docs.conda.io/projects/conda/latest/user-guide/tasks/create-custom-channels.html) to creating custom channels.
+For more information, you can also [visit the Conda user guide](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/create-custom-channels.html) on creating custom channels.
## Storage account permissions Now, we will need to validate the permissions on the storage account. To set these permissions, navigate to the path where the custom channel will be created. Then, create a SAS token for ```privatechannel``` that has read, list, and execute permissions.
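One way to generate that SAS token is with the Azure CLI. The following is only a sketch under the assumptions noted in the comments; Storage Explorer or the portal can also generate the token.

```bash
# A hedged sketch that assumes privatechannel is a blob container in the account below;
# for an ADLS Gen2 directory, scope the SAS accordingly. Adjust --permissions if your
# account type needs the execute (e) permission, and pass --account-key or --auth-mode
# login depending on how you authenticate.
az storage container generate-sas \
  --account-name mystorageaccount \
  --name privatechannel \
  --permissions rl \
  --expiry 2022-12-31 \
  --output tsv
```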
synapse-analytics Apache Spark Manage Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-manage-python-packages.md
dependencies:
- matplotlib - koalas==1.7.0 ```
-For details on creating an environment from this environment.yml file, see [Creating an environment from an environment.yml file](https://docs.conda.io/projects/conda/latest/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually).
+For details on creating an environment from this environment.yml file, see [Creating an environment from an environment.yml file](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
#### Update Python packages Once you have identified the environment specification file or set of libraries you want to install on the Spark pool, you can update the Spark pool libraries by navigating to the Azure Synapse Studio or Azure portal. Here, you can provide the environment specification and select the workspace libraries to install.
synapse-analytics Apache Spark Troubleshoot Library Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-troubleshoot-library-errors.md
The Synapse serverless Apache Spark pools are based off the Linux distribution.
To recreate the environment and validate your updates: 1. [Download](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/base_environment.yml) the template to locally recreate the Synapse runtime. There may be slight differences between the template and the actual Synapse environment.
- 2. Create a virtual environment using the [following instructions](https://docs.conda.io/projects/conda/latest/user-guide/tasks/manage-environments.html). This environment allows you to create an isolated Python installation with the specified list of libraries.
+ 2. Create a virtual environment using the [following instructions](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment). This environment allows you to create an isolated Python installation with the specified list of libraries.
```
conda env create -f environment.yml
```
To recreate the environment and validate your updates:
3. Use ``pip install -r <provide your req.txt file>`` to update the virtual environment with your specified packages. If the installation results in an error, then there may be a conflict between what is pre-installed in the Synapse base runtime and what is specified in the provided requirements file. These dependency conflicts must be resolved in order to get the updated libraries on your serverless Apache Spark pool. >[!IMPORTANT]
->Issues may arrise when using pip and conda together. When combining pip and conda, it's best to follow these [recommended best practices](https://docs.conda.io/projects/conda/latest/user-guide/tasks/manage-environments.html#using-pip-in-an-environment).
+>Issues may arise when using pip and conda together. When combining pip and conda, it's best to follow these [recommended best practices](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment).
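Put together, the validation workflow from the steps above looks roughly like the following sketch; the environment name and requirements file name are placeholders for whatever you use locally.

```bash
# A hedged sketch of the recommended layering: build the conda environment first, then
# add pip packages on top. "synapse-validate" and "requirements.txt" are placeholder names.
conda env create -n synapse-validate -f environment.yml
conda activate synapse-validate
pip install -r requirements.txt
```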
## Next steps-- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)
+- View the default libraries: [Apache Spark version support](apache-spark-version-support.md)
synapse-analytics Create Data Warehouse Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-azure-cli.md Binary files differ
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Title: Instead of ETL, design ELT description: Implement flexible data loading strategies for dedicated SQL pools within Azure Synapse Analytics. -+ Last updated 11/20/2020-+
synapse-analytics Guidance For Loading Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/guidance-for-loading-data.md
Title: Data loading best practices for dedicated SQL pools description: Recommendations and performance optimizations for loading data using dedicated SQL pools in Azure Synapse Analytics. -+ Last updated 11/20/2020-+
synapse-analytics Load Data From Azure Blob Storage Using Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md
Title: 'Tutorial: Load New York Taxicab data' description: Tutorial uses Azure portal and SQL Server Management Studio to load New York Taxicab data from an Azure blob for Synapse SQL. -+ Last updated 11/23/2020-+
synapse-analytics Load Data Wideworldimportersdw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw.md
Title: 'Tutorial: Load data using Azure portal & SSMS' description: Tutorial uses Azure portal and SQL Server Management Studio to load the WideWorldImportersDW data warehouse from a global Azure blob to an Azure Synapse Analytics SQL pool. -+ Last updated 01/12/2021-+
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool via the Azure portal' description: Use the Azure portal to pause compute for dedicated SQL pool to save costs. Resume compute when you are ready to use the data warehouse. --++ Last updated 11/23/2020
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell' description: You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW). compute resources. --++ Last updated 03/20/2019
synapse-analytics Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-arm-template.md
Title: Create a dedicated SQL pool (formerly SQL DW) by using Azure Resource Man
description: Learn how to create an Azure Synapse Analytics SQL pool by using Azure Resource Manager template. ++ Last updated 06/09/2020-- - subject-armqs - mode-arm
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Title: Authentication mechanisms with the COPY statement description: Outlines the authentication mechanisms to bulk load data -+ Last updated 07/10/2020-+
synapse-analytics Quickstart Bulk Load Copy Tsql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md
Title: 'Quickstart: Bulk load data using a single T-SQL statement' description: Bulk load data using the COPY statement -+ Last updated 11/20/2020-+
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
Title: Dedicated SQL pool Azure Advisor recommendations description: Learn about Synapse SQL recommendations and how they are generated -+ Last updated 06/26/2020-+
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Title: Manageability and monitoring - query activity, resource utilization description: Learn what capabilities are available to manage and monitor Azure Synapse Analytics. Use the Azure portal and Dynamic Management Views (DMVs) to understand query activity and resource utilization of your data warehouse. -+ Last updated 04/09/2020-+
synapse-analytics Sql Data Warehouse Continuous Integration And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md
Title: Continuous integration and deployment for dedicated SQL pool description: Enterprise-class Database DevOps experience for dedicated SQL pool in Azure Synapse Analytics with built-in support for continuous integration and deployment using Azure Pipelines. -+ Last updated 02/04/2020-+
synapse-analytics Sql Data Warehouse Get Started Create Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md
Last updated 03/10/2020--++
synapse-analytics Sql Data Warehouse How To Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md
Title: Optimize your Gen2 cache description: Learn how to monitor your Gen2 cache using the Azure portal. -+ Last updated 11/20/2020-+
synapse-analytics Sql Data Warehouse Install Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-install-visual-studio.md
description: Install Visual Studio and SQL Server Development Tools (SSDT) for S
-+ Last updated 05/11/2020-+
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Title: Use Azure Stream Analytics in dedicated SQL pool description: Tips for using Azure Stream Analytics with dedicated SQL pool in Azure Synapse for developing real-time solutions. -+ Last updated 9/25/2020-+
synapse-analytics Sql Data Warehouse Load From Azure Blob Storage With Polybase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md
Title: Load Contoso retail data to dedicated SQL pools description: Use PolyBase and T-SQL commands to load two tables from the Contoso retail data into dedicated SQL pools. -+ Last updated 11/20/2020-+
synapse-analytics Sql Data Warehouse Load From Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store.md
Title: 'Tutorial load data from Azure Data Lake Storage' description: Use the COPY statement to load data from Azure Data Lake Storage for dedicated SQL pools. -+ Last updated 11/20/2020-+
synapse-analytics Sql Data Warehouse Memory Optimizations For Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-memory-optimizations-for-columnstore-compression.md
Title: Improve columnstore index performance for dedicated SQL pool description: Reduce memory requirements or increase the available memory to maximize the number of rows within each rowgroup in dedicated SQL pool. -+ Last updated 03/22/2019-+
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
Title: Monitor workload - Azure portal description: Monitor Synapse SQL using the Azure portal -+ Last updated 02/04/2020-+
synapse-analytics Sql Data Warehouse Overview Manageability Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manageability-monitoring.md
Title: Manageability and monitoring - overview description: Monitoring and manageability overview for resource utilization, log and query activity, recommendations, and data protection (backup and restore) with dedicated SQL pool in Azure Synapse Analytics. -+ Last updated 08/27/2018-+
synapse-analytics Sql Data Warehouse Partner Compatibility Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-compatibility-issues.md
-+ Last updated 11/18/2020
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Denodo](./media/sql-data-warehouse-partner-data-integration/denodo_logo.png) |**Denodo**<br>Denodo provides real-time access to data across an organization's diverse data sources. It uses data virtualization to bridge data across many sources without replication. Denodo offers broad access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real time.|[Product page](https://www.denodo.com/en)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/denodo.denodo-platform-7_0-app-byol?tab=Overview)<br> | | ![Dimodelo](./media/sql-data-warehouse-partner-data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> | | ![Fivetran](./media/sql-data-warehouse-partner-data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero-maintenance, zero-configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://fivetran.com/)<br> |
-| ![HVR](./media/sql-data-warehouse-partner-data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in a variety of industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hvr.hvr-for-azure?tab=Overview)<br>|
-| ![Incorta](./media/sql-data-warehouse-partner-data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Leveraging a proprietary technology called Direct Data Mapping and IncortaΓÇÖs Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta?tab=Overview)<br>|
+| ![HVR](./media/sql-data-warehouse-partner-data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hvr.hvr-for-azure?tab=Overview)<br>|
+| ![Incorta](./media/sql-data-warehouse-partner-data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta?tab=Overview)<br>|
| ![Informatica](./media/sql-data-warehouse-partner-data-integration/informatica_logo.png) |**1. Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high-volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **2. Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments. |**Informatica Cloud services for Azure**<br>[Product page](https://www.informatica.com/products/cloud-integration.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.iics-winter)<br><br> **Informatica PowerCenter**<br>[Product page](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.powercenter/)<br>| | ![Information Builders](./medim) |
-| ![Loome](./media/sql-data-warehouse-partner-data-integration/loome-logo.png) |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine generating audience targeted alerts in real-time.| [Product page](https://www.loomesoftware.com)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) |
+| ![Loome](./media/sql-data-warehouse-partner-data-integration/loome-logo.png) |**Loome**<br>Loome provides a unique governance workbench that seamlessly integrates with Azure Synapse. It allows you to quickly onboard your data to the cloud and load your entire data source into ADLS in Parquet format. You can orchestrate data pipelines across data engineering, data science, and HPC workloads, including native integration with Azure Data Factory, Python, SQL, Synapse Spark, and Databricks. Loome allows you to easily monitor Data Quality exceptions, reinforcing Synapse as your strategic Data Quality Hub. Loome keeps an audit trail of resolved issues, and proactively manages data quality with a fully automated data quality engine that generates audience-targeted alerts in real time.| [Product page](https://www.loomesoftware.com)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bizdataptyltd1592265042221.loome?tab=Overview) |
| ![Lyftron](./media/sql-data-warehouse-partner-data-integration/lyftron-logo.png) |**Lyftron**<br>Lyftron's modern data hub combines an effortless data hub with agile access to data sources. Lyftron eliminates traditional ETL/ELT bottlenecks with automatic data pipelines and makes data instantly accessible to BI users with the modern cloud compute of Azure Synapse, Spark & Snowflake. Lyftron connectors automatically convert any source into normalized, ready-to-query relational format and replication. It offers advanced security, data governance and transformation, with simple ANSI SQL along with search capability on your enterprise data catalog.| [Product page](https://lyftron.com/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/lyftron.lyftronapp?tab=Overview) | | ![Matillion](./media/sql-data-warehouse-partner-data-integration/matillion-logo.png) |**Matillion**<br>Matillion is data transformation software for cloud data warehouses. Only Matillion is purpose-built for Azure Synapse, enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Matillion products are highly rated and trusted by companies of all sizes to meet their data integration and transformation needs. Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation.| [Product page](https://www.matillion.com/technology/cloud-data-warehouse/microsoft-azure-synapse/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/matillion.matillion-etl-azure-synapse?tab=Overview) | | ![oh22 HEDDA.IO](./media/sql-data-warehouse-partner-data-integration/heddaiowhitebg-logo.png) |**oh22 HEDDA<span></span>.IO**<br>oh22's HEDDA<span></span>.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA<span></span>.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [Product page](https://github.com/oh22is/HEDDA.IO)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
synapse-analytics Sql Data Warehouse Partner Data Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-management.md
Title: Data management partners description: Lists of third-party data management partners with solutions that support Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Partner Machine Learning Ai https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-machine-learning-ai.md
Title: Machine learning and AI partners description: Lists of third-party machine learning and artificial intelligence partners with solutions that support Azure Synapse Analytics.-
This article highlights Microsoft partners with machine learning and artificial
| - | -- | -- | | ![Dataiku](./media/sql-data-warehouse-partner-machine-learning-and-ai/dataiku-logo.png) |**Dataiku**<br>Dataiku is the centralized data platform that moves businesses along their data journey from analytics at scale to Enterprise AI, powering self-service analytics while also ensuring the operationalization of machine learning models in production. |[Product page](https://www.dataiku.com/partners/microsoft/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dataiku.dataiku-data-science-studio)<br> | | ![MATLAB](./media/sql-data-warehouse-partner-machine-learning-and-ai/mathworks-logo.png) |**Matlab**<br>MATLAB® is a programming platform designed for engineers and scientists. It combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. Millions worldwide use MATLAB for a range of applications, including machine learning, deep learning, signal and image processing, control systems, and computational finance. |[Product page](https://www.mathworks.com/products/database.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mathworks-inc.matlab-byol?tab=Overview)<br> |
-| ![Qubole](./media/sql-data-warehouse-partner-data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that leverage powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qubole-inc.qubole-data-service?tab=Overview)
+| ![Qubole](./media/sql-data-warehouse-partner-data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing, from SQL query tools to notebooks and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qubole-inc.qubole-data-service?tab=Overview)
## Next steps
-To learn more about other partners, see [Business Intelligence partners](sql-data-warehouse-partner-business-intelligence.md), [Data Integration partners](sql-data-warehouse-partner-data-integration.md) and [Data Management partners](sql-data-warehouse-partner-data-management.md).
+To learn more about other partners, see [Business Intelligence partners](sql-data-warehouse-partner-business-intelligence.md), [Data Integration partners](sql-data-warehouse-partner-data-integration.md), and [Data Management partners](sql-data-warehouse-partner-data-management.md).
synapse-analytics Sql Data Warehouse Partner System Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-system-integration.md
-+ Last updated 11/24/2020
This article highlights Microsoft system integration partner companies building
| ![Accenture](./media/sql-data-warehouse-partner-public-preview/accenture-logo.png) |**Accenture**<br>Bringing together 45,000+ dedicated professionals, the Accenture Microsoft Business Group, powered by Avanade, helps enterprises to thrive in the era of digital disruption.|[Partner page](https://www.accenture.com/us-en/services/microsoft-index)<br>| | ![Adatis](./media/sql-data-warehouse-partner-public-preview/adatis-logo.png) |**Adatis**<br>Adatis offers services that specialize in advanced data analytics, from data strategy and consultancy, to world class delivery and managed services. |[Partner page](https://adatis.co.uk/)<br> | | ![Blue Granite](./media/sql-data-warehouse-partner-public-preview/blue-granite-logo.png) |**Blue Granite**<br>The BlueGranite Catalyst for Analytics is an engagement approach that features their "think big, but start small" philosophy. Starting with collaborative envisioning and strategy sessions, Blue Granite works with clients to discover, create, and realize the value of new modern data and analytics solutions, using the latest technologies on the Microsoft platform.|[Partner page](https://www.blue-granite.com/)<br>|
-| ![Capax Global](./media/sql-data-warehouse-partner-public-preview/capax-global-logo.png) |**Capax Global**<br>We improve your business by making better use of information you already have, building custom solutions that align to your business goals, and set you up for long-term success. We combine well-established patterns and practices with technology while leveraging our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Partner page](https://www.capaxglobal.com/)<br>|
-| ![Coeo](./media/sql-data-warehouse-partner-public-preview/coeo-logo.png) |**Coeo**<br>CoeoΓÇÖs team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to provide flexible and scalable analytical solutions. Coeo can help you move to a hybrid or full Azure solution.|[Partner page](https://www.coeo.com/solution/technology/microsoft-azure/)<br>|
-| ![Cognizant](./media/sql-data-warehouse-partner-public-preview/cognizant-logo.png) |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant leverages its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Partner page](https://www.cognizant.com/partners/microsoftazure)<br>|
-| ![Neal Analytics](./media/sql-data-warehouse-partner-public-preview/neal-analytics-logo.png) |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we leverage data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Partner page](https://nealanalytics.com/)<br>|
+| ![Capax Global](./media/sql-data-warehouse-partner-public-preview/capax-global-logo.png) |**Capax Global**<br>We improve your business by making better use of information you already have, building custom solutions that align to your business goals, and setting you up for long-term success. We combine well-established patterns and practices with technology while using our team's wide range of industry and commercial software development experience. We share a passion for technology, innovation, and client satisfaction. Our pride for what we do drives the success of our projects and is fundamental to why people partner with us.|[Partner page](https://www.capaxglobal.com/)<br>|
+| ![Coeo](./media/sql-data-warehouse-partner-public-preview/coeo-logo.png) |**Coeo**<br>Coeo's team includes cloud consultants with deep expertise in Azure databases, and BI consultants dedicated to providing flexible and scalable analytic solutions. Coeo can help you move to a hybrid or full Azure solution.|[Partner page](https://www.coeo.com/solution/technology/microsoft-azure/)<br>|
+| ![Cognizant](./media/sql-data-warehouse-partner-public-preview/cognizant-logo.png) |**Cognizant**<br>As a Microsoft strategic partner, Cognizant has the consulting skills and experience to help customers make the journey to the cloud. For each client project, Cognizant uses its strong partnership with Microsoft to maximize customer benefits from the Azure architecture.|[Partner page](https://www.cognizant.com/partners/microsoftazure)<br>|
+| ![Neal Analytics](./media/sql-data-warehouse-partner-public-preview/neal-analytics-logo.png) |**Neal Analytics**<br>Neal Analytics helps companies navigate their digital transformation journey in converting data into valuable assets and a competitive advantage. With our machine learning and data engineering expertise, we use data to drive margin increases and profitable analytics projects. Comprised of consultants specializing in Data Science, Business Intelligence, Cognitive Services, practical AI, Data Management, and IoT, Neal Analytics is trusted to solve unique business problems and optimize operations across industries.|[Partner page](https://nealanalytics.com/)<br>|
| ![Pragmatic Works](./media/sql-data-warehouse-partner-public-preview/pragmatic-works-logo.png) |**Pragmatic Works**<br>Pragmatic Works can help you capitalize on the value of your data by empowering more users and applications on the same dataset. We kickstart, accelerate, and maintain your cloud environment with a range of solutions that fit your business needs.|[Partner page](https://www.pragmaticworks.com/)<br>| ## Next Steps
synapse-analytics Sql Data Warehouse Query Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-visual-studio.md
Title: Connect to dedicated SQL pool (formerly SQL DW) with VSTS description: Query dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics with Visual Studio. -+ Last updated 08/15/2019-+
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Title: PowerShell & REST APIs for dedicated SQL pool (formerly SQL DW) description: Top PowerShell cmdlets for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics including how to pause and resume a database. -+ Last updated 04/17/2018-+
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
Title: Source Control Integration description: Enterprise-class Database DevOps experience for dedicated SQL pool with native source control integration using Azure Repos (Git and GitHub). -+ Last updated 08/23/2019-+ # Source Control Integration for dedicated SQL pool in Azure Synapse Analytics
synapse-analytics Sql Data Warehouse Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot.md
Title: Troubleshooting dedicated SQL pool (formerly SQL DW) description: Troubleshooting dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated 11/13/2020-+
synapse-analytics Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/connect-overview.md
Title: Connect to Synapse SQL description: Get connected to Synapse SQL.- -+ Last updated 04/15/2020
synapse-analytics Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/connection-strings.md
Title: Connection strings for Synapse SQL description: Connection strings for Synapse SQL- -+ Last updated 04/15/2020
synapse-analytics Data Load Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/data-load-columnstore-compression.md
Title: Improve columnstore index performance description: Reduce memory requirements or increase the available memory to maximize the number of rows a columnstore index compresses into each rowgroup. -+ Last updated 04/15/2020-+
synapse-analytics Data Loading Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/data-loading-best-practices.md
Title: Data loading best practices description: Recommendations and performance optimizations for loading data into a dedicated SQL pool Azure Synapse Analytics. -+ Last updated 04/15/2020-+
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
You can instruct serverless SQL pool to traverse folders by specifying /* at the
`https://sqlondemandstorage.blob.core.windows.net/csv/population/**` > [!NOTE]
-> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of path.
+> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of the path. Just like Hadoop and PolyBase, it doesn't return files whose names begin with an underscore (_) or a period (.).
In the example below, if the unstructured_data_path=`https://mystorageaccount.dfs.core.windows.net/webdata/`, a serverless SQL pool query will return rows from mydata.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
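To make the wildcard rule concrete, here is a minimal sketch of an ad hoc query against the serverless endpoint. The workspace name, storage account, container, and login are hypothetical placeholders, and the CSV options assume files with a header row; adjust them to your data.

```bash
# Minimal sketch: query CSV files recursively with a serverless SQL pool.
# "myworkspace", "mystorageaccount", the "webdata" container, and the
# "sqladminuser" login are placeholders. The trailing /** makes the pool
# traverse subfolders; a plain folder path would skip them.
sqlcmd -S myworkspace-ondemand.sql.azuresynapse.net -U sqladminuser -I -Q "
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/webdata/**',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS rows;
"
```

The same wildcard rule applies to the LOCATION path of external tables in serverless SQL pool.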
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-external-tables.md
Specifies the folder or the file path and file name for the actual data in Azure
If you specify a folder LOCATION, a serverless SQL pool query will select from the external table and retrieve files from the folder. > [!NOTE]
-> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of path.
+> Unlike Hadoop and PolyBase, serverless SQL pool doesn't return subfolders unless you specify /** at the end of the path. Just like Hadoop and PolyBase, it doesn't return files whose names begin with an underscore (_) or a period (.).
In this example, if LOCATION='/webdata/', a serverless SQL pool query will return rows from mydata.txt. It won't return mydata2.txt and mydata3.txt because they're located in a subfolder.
synapse-analytics Get Started Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-azure-data-studio.md
-+ Last updated 04/15/2020
synapse-analytics Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-connect-sqlcmd.md
-+ Last updated 04/15/2020
synapse-analytics Get Started Ssms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/get-started-ssms.md
-+ Last updated 04/15/2020
synapse-analytics Load Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/load-data-overview.md
Title: Design a PolyBase data loading strategy for dedicated SQL pool description: Instead of ETL, design an Extract, Load, and Transform (ELT) process for loading data with dedicated SQL. -+ Last updated 04/15/2020-+
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-architecture.md
-+ Last updated 04/15/2020
synapse-analytics Troubleshoot Synapse Studio And Storage Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-and-storage-connectivity.md
Title: Troubleshoot connectivity between Synapse Studio and storage
description: Troubleshoot connectivity between Synapse Studio and storage + Last updated 11/11/2020
synapse-analytics Troubleshoot Synapse Studio Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-powershell.md
Title: Troubleshoot Synapse Studio connectivity description: Troubleshoot Azure Synapse Studio connectivity using PowerShell -++ Last updated 10/30/2020
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-redundancy.md
For details see the [Azure pricing page](https://azure.microsoft.com/pricing/det
### Comparison with other disk types
-Except for more write latency, disks using ZRS are identical to disks using LRS. They have the same performance targets.
+Except for higher write latency, disks using ZRS are identical to disks using LRS. They have the same performance targets. We recommend running [disk benchmarks](disks-benchmarks.md) that simulate your application's workload to compare the latency between LRS and ZRS disks.
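One way to make that comparison is to run an identical fio job against an LRS-backed and a ZRS-backed data disk attached to the same VM and compare the reported completion latencies. The sketch below is illustrative only; the device paths are assumptions, and the raw-device write test destroys any data on those disks, so use empty test disks.

```bash
# Illustrative latency comparison: identical random-write jobs against an
# LRS test disk (assumed /dev/sdc) and a ZRS test disk (assumed /dev/sdd).
# WARNING: writes directly to the devices; use empty test disks only.
for dev in /dev/sdc /dev/sdd; do
  sudo fio --name=zrs-vs-lrs --filename="$dev" \
      --rw=randwrite --bs=8k --iodepth=32 --numjobs=4 \
      --direct=1 --ioengine=libaio --runtime=60 --time_based \
      --group_reporting
done
```

Compare the `clat` percentiles in the two result blocks to see the additional write latency that ZRS introduces.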
### Create ZRS managed disks
New-AzResourceGroupDeployment -ResourceGroupName zrstesting `
## Next steps -- Use these sample [Azure Resource Manager templates to create a VM with ZRS disks](https://github.com/Azure-Samples/managed-disks-powershell-getting-started/tree/master/ZRSDisks).
+- Use these sample [Azure Resource Manager templates to create a VM with ZRS disks](https://github.com/Azure-Samples/managed-disks-powershell-getting-started/tree/master/ZRSDisks).
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
A: Yes, you can. You can use the license type of `RHEL_BYOS` for RHEL VMs and `S
*Q: Can I use Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES?*
-A: Yes, Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES is in preview. You can [learn more about this benefit and how to use it here](https://docs.microsoft.com/azure/virtual-machine-scale-sets/azure-hybrid-benefit-linux-vmss).
+A: Yes, Azure Hybrid Benefit on virtual machine scale sets for RHEL and SLES is in preview. You can [learn more about this benefit and how to use it here](/azure/virtual-machine-scale-sets/azure-hybrid-benefit-linux).
*Q: Can I use Azure Hybrid Benefit on reserved instances for RHEL and SLES?*
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/download-vhd.md
In this article, you learn how to download a Linux virtual hard disk (VHD) file
## Stop the VM
-A VHD canΓÇÖt be downloaded from Azure if it's attached to a running VM. You need to stop the VM to download the VHD.
+A VHD can't be downloaded from Azure if it's attached to a running VM. If you want to keep the VM running, you can [create a snapshot and then download the snapshot](#alternative-snapshot-the-vm-disk).
+
+To stop the VM:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. On the left menu, select **Virtual Machines**.
A VHD canΓÇÖt be downloaded from Azure if it's attached to a running VM. You nee
:::image type="content" source="./media/download-vhd/export-stop.PNG" alt-text="Shows the menu button to stop the VM.":::
+### Alternative: Snapshot the VM disk
+
+Take a snapshot of the disk to download.
+
+1. Select the VM in the [portal](https://portal.azure.com).
+2. Select **Disks** in the left menu and then select the disk you want to snapshot. The details of the disk will be displayed.
+3. Select **Create Snapshot** from the menu at the top of the page. The **Create snapshot** page will open.
+4. In **Name**, type a name for the snapshot.
+5. For **Snapshot type**, select **Full** or **Incremental**.
+6. When you are done, select **Review + create**.
+
+Your snapshot will be created shortly. You can then use it to download the VHD or to create another VM.
+
+> [!NOTE]
+> If you don't stop the VM first, the snapshot will not be clean. The snapshot will be in the same state as if the VM had been power cycled or crashed at the point in time when the snapshot was made. While usually safe, it could cause problems if the applications running at the time were not crash resistant.
+>
+> This method is only recommended for VMs with a single OS disk. VMs with one or more data disks should be stopped before download or before creating a snapshot for the OS disk and each data disk.
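If you prefer scripting to the portal, the Azure CLI sketch below creates the same kind of snapshot from a VM's OS disk; the resource group, VM, and snapshot names are placeholders.

```bash
# Sketch: snapshot the OS disk of a VM with the Azure CLI.
# "myResourceGroup", "myVM", and "myOsSnapshot" are placeholder names.
osDiskId=$(az vm show --resource-group myResourceGroup --name myVM \
    --query "storageProfile.osDisk.managedDisk.id" --output tsv)

az snapshot create --resource-group myResourceGroup --name myOsSnapshot \
    --source "$osDiskId"
```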
+ ## Generate SAS URL To download the VHD file, you need to generate a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md?toc=/azure/virtual-machines/windows/toc.json) URL. When the URL is generated, an expiration time is assigned to the URL.
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/instance-metadata-service.md
Previously updated : 02/21/2021 Last updated : 04/16/2021
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/n-series-driver-setup.md
sudo reboot
``` The installation can take several minutes.
+
+ > [!NOTE]
+ > Visit [Fedora](https://dl.fedoraproject.org/pub/epel/) and [Nvidia CUDA repo](https://developer.download.nvidia.com/compute/cuda/repos/) to pick the correct package for the CentOS or RHEL version you want to use.
+ >
+
+For example, CentOS 8 and RHEL 8 will need the following steps.
+
+ ```bash
+ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
+ sudo yum install dkms
+
+ CUDA_REPO_PKG=cuda-repo-rhel8-10.2.89-1.x86_64.rpm
+ wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
+
+ sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
+ rm -f /tmp/${CUDA_REPO_PKG}
+
+ sudo yum install cuda-drivers
+ ```
4. To optionally install the complete CUDA toolkit, type:
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/trusted-launch-portal.md
Create a virtual machine with trusted launch enabled.
5. Under **Project details**, make sure the correct subscription is selected. 6. Under **Resource group**, select **Create new** and type a name for your resource group or select an existing resource group from the dropdown. 7. Under **Instance details**, type a name for the virtual machine and choose a region that supports [trusted launch](trusted-launch.md#public-preview-limitations).
-8. Under **Image**, select a Gen 2 [image that supports trusted launch](trusted-launch.md#public-preview-limitations).
+8. Under **Image**, select a Gen 2 [image that supports trusted launch](trusted-launch.md#public-preview-limitations). Make sure you see the following message: **This image supports trusted launch preview. Configure in the Advanced tab**.
> [!TIP] > If you don't see the Gen 2 version of the image you want in the drop-down, select **See all images** and then change the **VM Generation** filter to only show Gen 2 images. Find the image in the list, then use the **Select** drop-down to select the Gen 2 version.
-
+
+ :::image type="content" source="media/trusted-launch/gen-2-image.png" alt-text="Screenshot showing the message confirming that this is a gen2 image that supports trusted launch.":::
+
+13. Select a VM size that supports trusted launch. See the list of [supported sizes](trusted-launch.md#public-preview-limitations).
+14. Fill in the **Administrator account** information and then **Inbound port rules**.
1. Switch over to the **Advanced** tab by selecting it at the top of the page. 1. Scroll down to the **VM generation** section. Make sure **Gen 2** is selected. 1. While still on the **Advanced** tab, scroll down to **Trusted launch**, and then select the **Trusted launch** checkbox. This will make two more options appear - Secure boot and vTPM. Select the appropriate options for your deployment. :::image type="content" source="media/trusted-launch/trusted-launch-portal.png" alt-text="Screenshot showing the options for trusted launch.":::
-12. Go back to the **Basics** tab, under **Image**, and make sure you see the following message: **This image supports trusted launch preview. Configure in the Advanced tab**. The gen 2 image should now be selected.
-
- :::image type="content" source="media/trusted-launch/gen-2-image.png" alt-text="Screenshot showing the message confirming that this is a gen2 image that supports trusted launch.":::
-
-13. Select a VM size that supports trusted launch. See the list of [supported sizes](trusted-launch.md#public-preview-limitations).
-14. Fill in the **Administrator account** information and then **Inbound port rules**.
15. At the bottom of the page, select **Review + Create**. 16. On the **Create a virtual machine** page, you can see the details about the VM you are about to deploy. When you are ready, select **Create**. (A rough Azure CLI equivalent is sketched after these steps.)
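The Azure CLI sketch below is illustrative only; it assumes a CLI version that exposes the trusted launch parameters, and the resource names, image URN, and size are placeholders you should replace with a Gen 2 image and a supported size.

```bash
# Rough sketch of the portal steps with the Azure CLI (assumes the CLI
# supports trusted launch parameters); names, image, and size are placeholders.
az vm create \
    --resource-group myResourceGroup \
    --name myTrustedLaunchVM \
    --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
    --size Standard_D2s_v3 \
    --admin-username azureuser \
    --generate-ssh-keys \
    --security-type TrustedLaunch \
    --enable-secure-boot true \
    --enable-vtpm true
```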
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/download-vhd.md
In this article, you learn how to download a Windows virtual hard disk (VHD) fil
## Optional: Generalize the VM
-If you want to use the VHD as an [image](tutorial-custom-images.md) to create other VMs, you should use [Sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation) to generalize the operating system.
+If you want to use the VHD as an [image](tutorial-custom-images.md) to create other VMs, you should use [Sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation) to generalize the operating system. Otherwise, you will have to make a copy of the disk for each VM you want to create.
To use the VHD as an image to create other VMs, generalize the VM.
To use the VHD as an image to create other VMs, generalize the VM.
5. In the System Preparation Tool dialog box, select **Enter System Out-of-Box Experience (OOBE)**, and make sure that **Generalize** is selected. 6. In Shutdown Options, select **Shutdown**, and then click **OK**.
+If you don't want to generalize your current VM, you can still create a generalized image by first [making a snapshot of the OS disk](#alternative-snapshot-the-vm-disk), creating a new VM from the snapshot, and then generalizing the copy.
## Stop the VM
-A VHD canΓÇÖt be downloaded from Azure if it's attached to a running VM. You need to stop the VM to download a VHD.
+A VHD can't be downloaded from Azure if it's attached to a running VM. If you want to keep the VM running, you can [create a snapshot and then download the snapshot](#alternative-snapshot-the-vm-disk).
1. On the Hub menu in the Azure portal, click **Virtual Machines**. 1. Select the VM from the list. 1. On the blade for the VM, click **Stop**.
+### Alternative: Snapshot the VM disk
+
+Take a snapshot of the disk to download.
+
+1. Select the VM in the [portal](https://portal.azure.com).
+2. Select **Disks** in the left menu and then select the disk you want to snapshot. The details of the disk will be displayed.
+3. Select **Create Snapshot** from the menu at the top of the page. The **Create snapshot** page will open.
+4. In **Name**, type a name for the snapshot.
+5. For **Snapshot type**, select **Full** or **Incremental**.
+6. When you are done, select **Review + create**.
+
+Your snapshot will be created shortly. You can then use it to download the VHD or to create another VM.
+
+> [!NOTE]
+> If you don't stop the VM first, the snapshot will not be clean. The snapshot will be in the same state as if the VM had been power cycled or crashed at the point in time when the snapshot was made. While usually safe, it could cause problems if the applications running at the time were not crash resistant.
+>
+> This method is only recommended for VMs with a single OS disk. VMs with one or more data disks should be stopped before download or before creating a snapshot for the OS disk and each data disk.
## Generate download URL
To download the VHD file, you need to generate a [shared access signature (SAS)]
1. On the page for the VM, click **Disks** in the left menu. 1. Select the operating system disk for the VM. 1. On the page for the disk, select **Disk Export** from the left menu.
-1. The default expiration time of the URL is *3600* seconds. Increase this to **36000** for Windows OS disks.
+1. The default expiration time of the URL is *3600* seconds (one hour). You may need to increase this for Windows OS disks or large data disks. **36000** seconds (10 hours) is usually sufficient.
1. Click **Generate URL**. > [!NOTE]
-> The expiration time is increased from the default to provide enough time to download the large VHD file for a Windows Server operating system. You can expect a VHD file that contains the Windows Server operating system to take several hours to download depending on your connection. If you are downloading a VHD for a data disk, the default time is sufficient.
+> The expiration time is increased from the default to provide enough time to download the large VHD file for a Windows Server operating system. Large VHDs can take up to several hours to download depending on your connection and the size of the VM.
> >
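If you would rather script the export than click through the portal, the sketch below grants a 10-hour read-only SAS on the disk with the Azure CLI and downloads the VHD with AzCopy; the resource group, disk name, and output file are placeholders.

```bash
# Sketch: export a managed disk by granting a read-only SAS for 36000
# seconds (10 hours) and downloading the VHD with AzCopy.
# "myResourceGroup" and "myOsDisk" are placeholder names.
sas=$(az disk grant-access --resource-group myResourceGroup --name myOsDisk \
    --access-level Read --duration-in-seconds 36000 \
    --query accessSas --output tsv)

azcopy copy "$sas" ./myOsDisk.vhd

# Revoke the SAS when the download is complete.
az disk revoke-access --resource-group myResourceGroup --name myOsDisk
```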
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/instance-metadata-service.md
Previously updated : 02/21/2021 Last updated : 04/16/2021
virtual-machines Azure Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/azure-monitor-overview.md
It is highly recommended that customers enable data sharing, as it gives Microso
At a high level, the following diagram explains how Azure Monitor for SAP Solutions collects telemetry from SAP HANA database. The architecture is agnostic to whether SAP HANA is deployed on Azure Virtual Machines or Azure Large Instances.
-![Azure Monitor for SAP solutions architecture](./media/azure-monitor-sap/azure-monitor-architecture.png)
+![Azure Monitor for SAP solutions architecture](https://user-images.githubusercontent.com/75772258/115046700-62ff3280-9ef5-11eb-8d0d-cfcda526aeeb.png)
The key components of the architecture are: - Azure portal - the starting point for customers. Customers can navigate to the marketplace within the Azure portal and discover Azure Monitor for SAP Solutions
virtual-machines Azure Monitor Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/azure-monitor-providers.md
For public preview, the following provider types are supported:
- High-availability cluster - Operating System
-![Azure Monitor for SAP solutions providers](./media/azure-monitor-sap/azure-monitor-providers.png)
+![Azure Monitor for SAP solutions providers](https://user-images.githubusercontent.com/75772258/115047655-5a5b2c00-9ef6-11eb-9e0c-073e5e1fcd0e.png)
We recommend that customers configure at least one provider from the available provider types when deploying the SAP Monitor resource. Configuring a provider starts data collection from the component for which the provider is configured.