Updates from: 06/03/2021 03:03:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/service-limits.md
Previously updated : 05/12/2021 Last updated : 06/02/2021
This article contains the usage constraints and other service limits for the Azure Active Directory B2C (Azure AD B2C) service.
## End user/consumption related limits
-The following end-user related service limits apply to all authentication requests to Azure AD B2C. The table below illustrates the **peak** token issuances for default user flow and custom policy configurations.
+The following end-user-related service limits apply to all authentication and authorization protocols supported by Azure AD B2C, including SAML, OpenID Connect, OAuth2, and ROPC.
-|User Journey | Limit |
+|Category |Limit |
|||
-|Combined sign up and sign in | 2,400/min |
-|Sign up | 1,200/min |
-|Sign in | 2,400/min |
-|Password reset | 1,200/min |
-|Profile edit | 2,400/min |
-|ROPC | 10,000/min |
-|||
+|Number of requests per IP address per Azure AD B2C tenant |6,000/5min |
+|Total number of requests per Azure AD B2C tenant |12,000/min |
-|Category | Limit |
-|||
-|Tokens issued per IP address per Azure AD B2C tenant |240/min  |
-|||
+The number of requests can vary depending on the number of directory reads and writes that occur during the Azure AD B2C user journey. For example, a simple sign-in journey that reads from the directory consists of 1 request. If the sign-in journey must also update the directory, this operation is counted as an additional request.
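As a rough worked example of the arithmetic (mine, not the article's): if every journey in a tenant performs one directory read and one directory write, each journey counts as 2 requests, so the 12,000/min tenant limit supports at most 12,000 / 2 = 6,000 such journeys per minute.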
## Azure AD B2C configuration limits

The following table lists the administrative configuration limits in the Azure AD B2C service.
-|Category |Type |Limit |
-||||
-|Maximum string length per attribute |User|250 Chars |
-|Maximum number of [`Identities`](user-profile-attributes.md#identities-attribute) in a user create operation | User|7 |
-|Number of scopes per application |Application|1000 |
-|Number of [custom attributes](user-profile-attributes.md#extension-attributes) per user <sup>1</sup> |Application|100 |
-|Number of redirect URLs per application |Application|100 |
-|Number of sign-out URLs per application  |Application|1  |
-|Levels of policy [inheritance](custom-policy-overview.md#inheritance-model) |Custom policy|10 |
-|Maximum policy file size |Custom policy|400 KB |
-|Number of B2C tenants per subscription |Azure Subscription|20 |
-|Number of policies per Azure AD B2C tenant | Tenant|200 |
+|Category |Limit |
+|||
+|Number of scopes per application |1000 |
+|Number of [custom attributes](user-profile-attributes.md#extension-attributes) per user <sup>1</sup> |100 |
+|Number of redirect URLs per application |100 |
+|Number of sign-out URLs per application |1 |
+|Maximum string length per attribute |250 characters |
+|Number of B2C tenants per subscription |20 |
+|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 |
+|Number of policies per Azure AD B2C tenant |200 |
+|Maximum policy file size |400 KB |
<sup>1</sup> See also [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md).
The following table lists the administrative configuration limits in the Azure AD B2C service.
- Learn about [Microsoft Graph's throttling guidance](/graph/throttling)
- Learn about the [validation differences for Azure AD B2C applications](../active-directory/develop/supported-accounts-validation.md)
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
The following are the prerequisites and the steps if you want to use Conditional Access authentication context.
Once your application is integrated using the supported authentication protocols and registered in an Azure AD tenant that has the Conditional Access feature available for use, you can kick-start the process of integrating this feature into the applications that sign in users.
-**First**, declare and make the authentication contexts available in your tenant. For more information, see [Configure authentication contexts](../conditional-access/concept-conditional-access-cloud-apps.md#configure-authentication-contexts)
+> [!NOTE]
+> A detailed walkthrough of this feature is also available as a recorded session at [Use Conditional Access Auth Context in your app for step\-up authentication](https://www.youtube.com/watch?v=_iO7CfoktTY).
+
+**First**, declare and make the authentication contexts available in your tenant. For more information, see [Configure authentication contexts](../conditional-access/concept-conditional-access-cloud-apps.md#configure-authentication-contexts).
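Once an authentication context ID (such as `c1`) is protected by a Conditional Access policy, an application can trigger it by sending a claims challenge with its token request. A minimal MSAL Python sketch, assuming a hypothetical public client registration and the `c1` context ID (both placeholders, not values from this article):

```python
import json
import msal

# Hypothetical client ID and tenant -- replace with your own registration.
app = msal.PublicClientApplication(
    "11111111-2222-3333-4444-555555555555",
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Ask for a token that satisfies auth context "c1" via the "acrs" claim.
claims = json.dumps({"access_token": {"acrs": {"essential": True, "value": "c1"}}})

result = app.acquire_token_interactive(
    scopes=["User.Read"],
    claims_challenge=claims,  # triggers the step-up defined by the Conditional Access policy
)
print(result.get("access_token") or result.get("error_description"))
```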
Values **C1-C25** are available for use as **Auth Context IDs** in a tenant. Examples of auth context may be:
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/publisher-verification-overview.md
Previously updated : 05/19/2020 Last updated : 06/01/2021
Publisher verification provides the following benefits:
- **Smoother enterprise adoption** - admins can configure [user consent policies](../manage-apps/configure-user-consent.md), with publisher verification status as one of the primary policy criteria.

> [!NOTE]
-> Starting in November 2020, end-users will no longer be able to grant consent to most newly registered multi-tenant apps without verified publishers. This will apply to apps that are registered after November 8th 2020, use OAuth2.0 to request permissions beyond basic sign-in and read user profile, and request consent from users in different tenants than the one the app is registered in. A warning will be displayed on the consent screen informing users that these apps are risky and are from unverified publishers.
+> Starting in November 2020, end-users will no longer be able to grant consent to most newly registered multi-tenant apps without verified publishers if [risk-based step-up consent](/azure/active-directory/manage-apps/configure-user-consent#risk-based-step-up-consent) is enabled. This will apply to apps that are registered after November 8th 2020, use OAuth2.0 to request permissions beyond basic sign-in and read user profile, and request consent from users in different tenants than the one the app is registered in. A warning will be displayed on the consent screen informing users that these apps are risky and are from unverified publishers.
## Requirements

There are a few prerequisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are:
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/secure-least-privileged-access.md
+ Title: Best practices for least privileged access on Azure AD - Microsoft identity platform
+description: Learn about a set of best practices and general guidance for least privilege.
+ Last updated : 04/26/2021
+#Customer intent: As a developer, I want to learn how to stay least privileged and require just enough permissions for my application.
+# Best practices for least privileged access for applications
+
+The principle of least privilege is an information security concept, which enforces the idea that users and applications should be granted the minimum level of access needed to perform required tasks. Understanding the principle of least privilege helps you build trustworthy applications for your customers.
+
+Least privilege adoption is more than just a good security practice. The concept helps you preserve the integrity and security of your data. It also protects the privacy of your data and reduces risk by preventing applications from having any more access to data than absolutely needed. On a broader level, adopting the least privilege principle is one of the ways organizations can embrace proactive security with [Zero Trust](https://www.microsoft.com/security/business/zero-trust).
+
+This article describes a set of best practices that you can use to adopt the least privilege principle and make your applications more secure for end users. You'll learn about the following aspects of least privilege:
+- How consent works with permissions
+- What it means for an app to be overprivileged or least privileged
+- How to approach least privilege as a developer
+- How to approach least privilege as an organization
+
+## Using consent to control access permissions to data
+
+Access to protected data requires [consent](../develop/application-consent-experience.md#consent-and-permissions) from the end user. Whenever an application that runs on your user's device requests access to protected data, the app should ask for the user's consent before it can access that data. The end user is required to grant (or deny) consent for the requested permission before the application can progress. As an application developer, it's best to request only the least privileged permissions.
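For instance, a minimal sketch of requesting only a narrow delegated permission up front with MSAL Python (the client ID is a placeholder, not a value from this article):

```python
import msal

# Hypothetical public client registration.
app = msal.PublicClientApplication(
    "11111111-2222-3333-4444-555555555555",
    authority="https://login.microsoftonline.com/common",
)

# Request only User.Read -- the user consents to reading their own profile,
# nothing more. Broader permissions can be requested later, if and when the
# app actually needs them (incremental consent).
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    print("Token acquired with least privileged scope.")
else:
    print(result.get("error_description"))
```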
+
+## Overprivileged and least privileged applications
+
+An overprivileged application may have one of the following characteristics:
+- **Unused permissions**: An application could end up with unused permissions when it fails to make API calls that utilize all the permissions granted to it. For example, in [MS Graph](/graph/overview), an app might only be reading OneDrive files (using the "*Files.Read.All*" permission) but has also been granted the "*Calendars.Read*" permission, despite not integrating with any Calendar APIs.
+- **Reducible permissions**: An app has reducible permission when the granted permission has a lesser privileged replacement that can complete the desired API call. For example, an app that is only reading User profiles, but has been granted "*User.ReadWrite.All*" might be considered overprivileged. In this case, the app should be granted "*User.Read.All*" instead, which is the least privileged permission needed to satisfy the request.
+
+For an application to be considered as least privileged, it should have:
+- **Just enough permissions**: Grant only the minimum set of permissions required by an end user of an application, service, or system to perform the required tasks.
+
+## Approaching least privilege as an application developer
+
+As a developer, you have a responsibility to contribute to the security of your customer's data. When developing your applications, you need to adopt the principle of least privilege. We recommend that you follow these steps to prevent your application from being overprivileged:
+- Fully understand the permissions required for the API calls that your application needs to make
+- Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to find the least privileged permission for each API call that your app needs to make
+- Find the corresponding [permissions](/graph/permissions-reference) from least to most privileged
+- Remove any duplicate sets of permissions in cases where your app makes API calls that have overlapping permissions
+- Apply only the least privileged set of permissions to your application by choosing the least privileged permission in the permission list
+
+## Approaching least privilege as an organization
+
+Organizations often hesitate to modify existing applications because changes might affect business operations, but that presents a challenge when permissions that are already granted are overprivileged and need to be revoked. As an organization, it's good practice to check and review your permissions regularly. We recommend you follow these steps to keep your applications healthy:
+- Evaluate the API calls being made from your applications
+- Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and the [Microsoft Graph](/graph/overview) documentation for the required and least privileged permissions
+- Audit privileges that are granted to users or applications
+- Update your applications with the least privileged permission set
+- Conduct permission reviews regularly to make sure all authorized permissions are still relevant
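For the audit step, a rough sketch of listing the delegated permission grants in a tenant with Microsoft Graph (assumes you already hold an access token with permission to read the directory; the token value is a placeholder):

```python
import requests

token = "<access-token>"  # placeholder: acquire via MSAL with a directory read permission

# List delegated permission grants (user/admin consents) in the tenant.
url = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
headers = {"Authorization": f"Bearer {token}"}

while url:
    page = requests.get(url, headers=headers).json()
    for grant in page.get("value", []):
        # 'scope' is the space-separated list of delegated permissions
        # granted to the client identified by 'clientId'.
        print(grant["clientId"], grant.get("scope"))
    url = page.get("@odata.nextLink")  # follow paging if present
```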
+
+## Next steps
+
+- For more information on consent and permissions in Azure Active Directory, see [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
+- For more information on permissions and consent in Microsoft identity, see [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md).
+- For more information on Zero Trust, see [Zero Trust Deployment Center](/security/zero-trust/).
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/troubleshoot-publisher-verification.md
The error message displayed is: "A verified publisher cannot be added to this application."
First, verify you've met the [publisher verification requirements](publisher-verification-overview.md#requirements).
-When a request to add a verified publisher is made, an number of signals are used to make a security risk assessment. If the request is determined to be risky an error will be returned. For security reasons, Microsoft does not disclose the specific criteria used to determine whether a request is risky or not.
+When a request to add a verified publisher is made, a number of signals are used to make a security risk assessment. If the request is determined to be risky an error will be returned. For security reasons, Microsoft does not disclose the specific criteria used to determine whether a request is risky or not.
## Next steps
If you have reviewed all of the previous information and are still receiving an
- TenantId where app is registered
- MPN ID
- REST request being made
- Error code and message being returned
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
Persistent versions use a unique desktop image for each user or a pool of users.
Non-persistent versions use a collection of desktops that users can access on an as-needed basis. These non-persistent desktops are reverted to their original state; for Windows current<sup>1</sup> this happens when a virtual machine goes through a shutdown/restart/OS reset process, and for Windows down-level<sup>2</sup> it happens when a user signs out.
-There has been a rise in non-persistent VDI deployments as remote work continues to be the new norm. As customers deploy non-persistent VDI, it is important to ensure that you manage device churn that could be caused due to frequent device registration without having a proper strategy for device lifecycle management.
+There has been a rise in non-persistent VDI deployments as remote work continues to be the new norm. As customers deploy non-persistent VDI, it is important to ensure that you manage stale devices that are created as a result of frequent device registration without having a proper strategy for device lifecycle management.
> [!IMPORTANT]
-> Failure to manage device churn, can lead to pressure increase on your tenant quota usage consumption and potential risk of service interruption, if you run out of tenant quota. You should follow the guidance documented below when deploying non persistent VDI environments to avoid this situation.
+> Failure to manage stale devices can increase pressure on your tenant quota consumption and create a risk of service interruption if you run out of tenant quota. You should follow the guidance documented below when deploying non-persistent VDI environments to avoid this situation.
This article will cover Microsoft's guidance to administrators on support for device identity and VDI. For more information about device identity, see the article [What is a device identity](overview.md).
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Previously updated : 01/14/2020 Last updated : 06/01/2021
-# Restrict guest access permissions (preview) in Azure Active Directory
+# Restrict guest access permissions in Azure Active Directory
-Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of default user permissions. This is a preview of a new guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so your guest access choices now are:
+Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. This is a new guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so your guest access levels are:
Permission level | Access level | Value
--- | --- | ---
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
This article helps you to add and remove a group from another group using Azure Active Directory.
You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.

>[!Important]
->We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li></ul>
+>We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li> Adding security groups as members of mail-enabled security groups</li></ul>
### To add a group as a member of another group
active-directory Active Directory Licensing Whatis Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-licensing-whatis-azure-portal.md
Until now, licenses could only be assigned at the individual user level, which c
To address those challenges, Azure AD now includes group-based licensing. You can assign one or more product licenses to a group. Azure AD ensures that the licenses are assigned to all members of the group. Any new members who join the group are assigned the appropriate licenses. When they leave the group, those licenses are removed. This licensing management eliminates the need to automate license management via PowerShell to reflect changes in the organization and departmental structure on a per-user basis.
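As an illustration of the API shape (a sketch only; the group ID and SKU ID are placeholders, and the caller needs suitable Graph permissions), a group license assignment via Microsoft Graph might look like:

```python
import requests

token = "<access-token>"        # placeholder
group_id = "<group-object-id>"  # placeholder
sku_id = "<license-sku-guid>"   # placeholder

# Assign a product license to a group; Azure AD then propagates it to members.
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{group_id}/assignLicense",
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json={"addLicenses": [{"skuId": sku_id, "disabledPlans": []}], "removeLicenses": []},
)
print(resp.status_code, resp.json())
```

## Licensing requirements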
-You must have one of the following licenses to use group-based licensing:
+You must have one of the following licenses **for every user who benefits from** group-based licensing:
- Paid or trial subscription for Azure AD Premium P1 and above
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/create-access-review.md
If you have assigned guests as reviewers and they have not accepted the invite,
## Create reviews via APIs
-You can also create access reviews using APIs. What you do to manage access reviews of groups and application users in the Azure portal can also be done using Microsoft Graph APIs. For more information, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-root?view=graph-rest-beta&preserve-view=true). For a code sample, see [Example of retrieving Azure AD access reviews via Microsoft Graph](https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Example-of-retrieving-Azure-AD-access-reviews-via-Microsoft/m-p/236096).
+You can also create access reviews using APIs. What you do to manage access reviews of groups and application users in the Azure portal can also be done using Microsoft Graph APIs.
++ For more information, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-root?view=graph-rest-beta&preserve-view=true).
++ For a tutorial, see [Use the access reviews API to review guest access to your Microsoft 365 groups](/graph/tutorial-accessreviews-m365group).
++ For a code sample, see [Example of retrieving Azure AD access reviews via Microsoft Graph](https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Example-of-retrieving-Azure-AD-access-reviews-via-Microsoft/m-p/236096).

## Next steps
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/configure-user-consent.md
Previously updated : 06/01/2020 Last updated : 06/01/2021
Set-AzureADMSAuthorizationPolicy `
## Risk-based step-up consent
-Risk-based step-up consent helps reduce user exposure to malicious apps that make [illicit consent requests](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants). If Microsoft detects a risky end-user consent request, the request will require a "step-up" to admin consent instead. This capability is enabled by default, but it will only result in a behavior change when end-user consent is enabled.
+Risk-based step-up consent helps reduce user exposure to malicious apps that make [illicit consent requests](/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants). For example, consent requests for newly registered multi-tenant apps that are not [publisher verified](/azure/active-directory/develop/publisher-verification-overview) and require non-basic permissions are considered risky. If Microsoft detects a risky end-user consent request, the request will require a "step-up" to admin consent instead. This capability is enabled by default, but it will only result in a behavior change when end-user consent is enabled.
When a risky consent request is detected, the consent prompt will display a message indicating that admin approval is needed. If the [admin consent request workflow](configure-admin-consent-workflow.md) is enabled, the user can send the request to an admin for further review directly from the consent prompt. If it's not enabled, the following message will be displayed:
active-directory New Relic Limited Release Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/new-relic-limited-release-tutorial.md
In this tutorial, you'll learn how to integrate New Relic with Azure Active Directory (Azure AD).
To get started, you need:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* A New Relic subscription that's enabled for single sign-on (SSO).
+* A New Relic organization on the [New Relic One account/user model](https://docs.newrelic.com/docs/accounts/original-accounts-billing/original-product-based-pricing/overview-changes-pricing-user-model/#user-models) and on either Pro or Enterprise edition. For more information, see [New Relic requirements](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more).
## Scenario description
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Basic SAML Configuration** section, fill in values for **Identifier** and **Reply URL**.
- * Retrieve these values by using the New Relic **My Organization** application. To use this application:
- 1. [Sign in](https://login.newrelic.com/) to New Relic.
- 1. On the top menu, select **Apps**.
- 1. In the **Your apps** section, select **My Organization** > **Authentication domains**.
- 1. Choose the authentication domain to which you want Azure AD SSO to connect (if you have more than one authentication domain). Most companies only have one authentication domain called **Default**. If there's only one authentication domain, you don't need to select anything.
+ * Retrieve these values from the [New Relic authentication domain UI](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/#ui). From there:
+ 1. If you have more than one authentication domain, choose the one to which you want Azure AD SSO to connect. Most companies only have one authentication domain called **Default**. If there's only one authentication domain, you don't need to select anything.
1. In the **Authentication** section, **Assertion consumer URL** contains the value to use for **Reply URL**.
1. In the **Authentication** section, **Our entity ID** contains the value to use for **Identifier**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to New Relic.
## Configure New Relic SSO
-Follow these steps to configure SSO at New Relic.
+Follow these steps to configure SSO at New Relic.
1. [Sign in](https://login.newrelic.com/) to New Relic.
-1. On the top menu, select **Apps**.
-
-1. In the **Your apps** section, select **My Organization** > **Authentication domains**.
+1. Go to the [authentication domain UI](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/#ui).
1. Choose the authentication domain to which you want Azure AD SSO to connect (if you have more than one authentication domain). Most companies only have one authentication domain called **Default**. If there's only one authentication domain, you don't need to select anything.
In this section, you create a user called B.Simon in New Relic.
1. [Sign in](https://login.newrelic.com/) to New Relic.
-1. On the top menu, select **Apps**.
-
-1. In the **Your apps** section, select **User Management**.
+1. Go to the [**User management** UI](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/add-manage-users-groups-roles/#where).
1. Select **Add user**.
In this section, you create a user called B.Simon in New Relic.
1. For **Email**, enter the value that will be sent by Azure AD SSO.
- 1. Choose a user **Type** and a user **Group** for the user. For a test user, **Basic User** for Type and **User** for Group are reasonable choices.
+ 1. Choose a user **Type** and a user **Group** for the user. For a test user, **Basic user** for Type and **User** for Group are reasonable choices.
1. To save the user, select **Add User**.
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure New Relic you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once done, you can verify that your users have been added in New Relic by going to the [**User management** UI](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/add-manage-users-groups-roles/#where) and seeing if they're there.
+
+Next, you will probably want to assign your users to specific New Relic accounts or roles. To learn more about this, see [User management concepts](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/add-manage-users-groups-roles/#understand-concepts).
+
+In New Relic's authentication domain UI, you can configure [other settings](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-user-management/authentication-domains-saml-sso-scim-more/#session-mgmt), like session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Where `--enable-private-cluster` is a mandatory flag for a private cluster.
The following parameters can be leveraged to configure Private DNS Zone.

-- "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
-- If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register Microsoft.ContainerServices in both the subscriptions.
-- "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
-- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
-- "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
+- "System", which is also the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
+- "None", which means AKS will not create a Private DNS Zone (PREVIEW). This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment.
+- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
+ - If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register Microsoft.ContainerServices in both the subscriptions.
+ - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
### Prerequisites
app-service App Service Hybrid Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-hybrid-connections.md
The Hybrid Connections feature requires a relay agent in the network that hosts
This tool runs on Windows Server 2012 and later. The HCM runs as a service and connects outbound to Azure Relay on port 443.
+> [!NOTE]
+> Hybrid Connection Manager cannot coexist with BizTalk Hybrid Connection Manager or Service Bus for Windows Server. Before installing HCM, remove any versions of these packages.
+>
After installing HCM, you can run HybridConnectionManagerUi.exe to use the UI for the tool. This file is in the Hybrid Connection Manager installation directory. In Windows 10, you can also just search for *Hybrid Connection Manager UI* in your search box.

:::image type="content" source="media/app-service-hybrid-connections/hybridconn-hcm.png" alt-text="Screenshot of Hybrid Connection Manager":::
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
The quickstart uses the `copy` element to create multiple instances of key-value
> [!IMPORTANT]
> This template requires App Configuration resource provider version `2020-07-01-preview` or later. This version uses the `reference` function to read key-values. The `listKeyValue` function that was used to read key-values in the previous version is not available starting in version `2020-07-01-preview`.

Two Azure resources are defined in the template:
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue.md
This section describes the global configuration settings available for this binding.
|visibilityTimeout|00:00:00|The time interval between retries when processing of a message fails. |
|batchSize|16|The number of queue messages that the Functions runtime retrieves simultaneously and processes in parallel. When the number being processed gets down to the `newBatchThreshold`, the runtime gets another batch and starts processing those messages. So the maximum number of concurrent messages being processed per function is `batchSize` plus `newBatchThreshold`. This limit applies separately to each queue-triggered function. <br><br>If you want to avoid parallel execution for messages received on one queue, you can set `batchSize` to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.<br><br>The maximum `batchSize` is 32. |
|maxDequeueCount|5|The number of times to try processing a message before moving it to the poison queue.|
-|newBatchThreshold|batchSize/2|Whenever the number of messages being processed concurrently gets down to this number, the runtime retrieves another batch.|
+|newBatchThreshold|N*batchSize/2|Whenever the number of messages being processed concurrently gets down to this number, the runtime retrieves another batch.<br><br>`N` represents the number of vCPUs available when running on App Service or Premium Plans. Its value is `1` for the Consumption Plan.|
|messageEncoding|base64| This setting is only available in [extension version 5.0.0 and higher](#storage-extension-5x-and-higher). It represents the encoding format for messages. Valid values are `base64` and `none`.|
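As a rough configuration sketch (not this article's own sample), these settings live under `extensions.queues` in `host.json` for Functions 2.x and later; the values shown are illustrative:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "visibilityTimeout": "00:00:30",
      "batchSize": 16,
      "maxDequeueCount": 5,
      "newBatchThreshold": 8,
      "messageEncoding": "base64"
    }
  }
}
```

## Next steps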
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
The following tables provide a quick comparison of the Azure Monitor agents for
| | Azure Monitor agent (preview) | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent |
|:|:|:|:|:|:|
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Environments supported** (see table below for supported operating systems) | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
| **Agent requirements** | None | None | None | None | Requires Log Analytics agent |
| **Data collected** | Syslog<br>Performance | Syslog<br>Performance | Performance | Syslog<br>Performance| Process dependencies<br>Network connection metrics |
| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
The following tables list the operating systems that are supported by the Azure
| Operating system | Azure Monitor agent | Log Analytics agent | Dependency agent | Diagnostics extension |
|:|::|::|::|::|
| Windows Server 2019 | X | X | X | X |
+| Windows Server 2019 Core | X | | | |
| Windows Server 2016 | X | X | X | X |
-| Windows Server 2016 Core | | | | X |
+| Windows Server 2016 Core | X | | | X |
| Windows Server 2012 R2 | X | X | X | X |
| Windows Server 2012 | X | X | X | X |
| Windows Server 2008 R2 SP1 | X | X | X | X |
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 04/12/2021 Last updated : 06/02/2021

# Troubleshooting problems in Azure Monitor metric alerts
Consider the following restrictions for metric alert rule names:
- Metric alert rule names must be unique within a resource group
- Metric alert rule names can't contain the following characters: * # & + : < > ? @ % { } \ /
- Metric alert rule names can't end with a space or a period
+- The combined resource group name and alert rule name can't exceed 253 characters
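To make the restrictions concrete, here's a small illustrative check (my own sketch, not an official validator; uniqueness within the resource group can't be verified offline):

```python
FORBIDDEN_CHARS = set('*#&+:<>?@%{}\\/')

def is_valid_metric_alert_name(resource_group: str, rule_name: str) -> bool:
    """Check a metric alert rule name against the documented restrictions."""
    if any(ch in FORBIDDEN_CHARS for ch in rule_name):
        return False                      # forbidden character
    if rule_name.endswith((' ', '.')):
        return False                      # can't end with a space or a period
    if len(resource_group) + len(rule_name) > 253:
        return False                      # combined length limit
    return True

print(is_valid_metric_alert_name("my-rg", "cpu-high"))   # True
print(is_valid_metric_alert_name("my-rg", "bad/name"))   # False
```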
> [!NOTE]
> If the alert rule name contains characters that aren't alphabetic or numeric (for example: spaces, punctuation marks or symbols), these characters may be URL-encoded when retrieved by certain clients.
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
It is worth mentioning that the collection endpoint pre-aggregates events before
||--|-||
| .NET Core and .NET Framework | Supported (V2.13.1+)| Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Supported (V2.7.2+) via [GetMetric](get-metric.md) |
| Java | Not Supported | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not Supported |
-| Node.js | Not Supported | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not Supported |
+| Node.js | Supported (V2.0.0+) | Supported via [TrackMetric](api-custom-events-metrics.md#trackmetric)| Not Supported |
| Python | Not Supported | Supported | Partially supported via [OpenCensus.stats](opencensus-python.md#metrics) |

> [!NOTE]
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Now the agent will collect metrics from each of the input plug-ins specified and
The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/).
-Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
+Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
## Clean up resources

When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.

## Next steps

-- Learn more about [custom metrics](./metrics-custom-overview.md).
+- Learn more about [custom metrics](./metrics-custom-overview.md).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Supported tables currently are limited to those specified below. All data from t
| AutoscaleEvaluationsLog | |
| AutoscaleScaleActionsLog | |
| AWSCloudTrail | |
-| AzureActivityV2 | |
| AzureAssessmentRecommendation | |
| AzureDevOpsAuditing | |
| BehaviorAnalytics | |
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Welcome to what's new in the Azure Monitor docs for April 2021. This article lis
**Updated articles**

-- [Configure data collection for the Azure Monitor agent (preview)](/azure/azure-monitoragents/data-collection-rule-azure-monitor-agent.md)
-- [Overview of Azure Monitor agents](/azure/azure-monitoragents/agents-overview.md)
-- [Collect Windows and Linux performance data sources with Log Analytics agent](/azure/azure-monitoragents/data-sources-performance-counters.md)
+- [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
+- [Overview of Azure Monitor agents](agents/agents-overview.md)
+- [Collect Windows and Linux performance data sources with Log Analytics agent](agents/data-sources-performance-counters.md)
## Alerts

**Updated articles**

-- [Action rules (preview)](/azure/azure-monitoralerts/alerts-action-rules.md)
-- [Create, view, and manage log alerts using Azure Monitor](/azure/azure-monitoralerts/alerts-log.md)
-- [Troubleshoot problems in IT Service Management Connector](/azure/azure-monitoralerts/itsmc-troubleshoot-overview.md)
+- [Action rules (preview)](alerts/alerts-action-rules.md)
+- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)
+- [Troubleshoot problems in IT Service Management Connector](alerts/itsmc-troubleshoot-overview.md)
## Application Insights

**New articles**

-- [Sampling overrides (preview) - Azure Monitor Application Insights for Java](/azure/azure-monitorapp/java-standalone-sampling-overrides.md)
-- [Configuring JMX metrics](/azure/azure-monitorapp/java-jmx-metrics-configuration.md)
+- [Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)
+- [Configuring JMX metrics](app/java-jmx-metrics-configuration.md)
**Updated articles**

-- [Application Insights for web pages](/azure/azure-monitorapp/javascript.md)
-- [Configuration options - Azure Monitor Application Insights for Java](/azure/azure-monitorapp/java-standalone-config.md)
-- [Quickstart: Start monitoring your website with Azure Monitor Application Insights](/azure/azure-monitorapp/website-monitoring.md)
-- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitorapp/change-analysis-visualizations.md)
-- [Use Application Change Analysis (preview) in Azure Monitor](/azure/azure-monitorapp/change-analysis.md)
-- [Application Insights API for custom events and metrics](/azure/azure-monitorapp/api-custom-events-metrics.md)
-- [Java codeless application monitoring Azure Monitor Application Insights](/azure/azure-monitorapp/java-in-process-agent.md)
-- [Enable Snapshot Debugger for .NET apps in Azure App Service](/azure/azure-monitorapp/snapshot-debugger-appservice.md)
-- [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](/azure/azure-monitorapp/snapshot-debugger-function-app.md)
-- [<a id=troubleshooting></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/azure/azure-monitorapp/snapshot-debugger-troubleshoot.md)
-- [Release notes for Azure Web App extension for Application Insights](/azure/azure-monitorapp/web-app-extension-release-notes.md)
-- [Set up Azure Monitor for your Python application](/azure/azure-monitorapp/opencensus-python.md)
-- [Upgrading from Application Insights Java 2.x SDK](/azure/azure-monitorapp/java-standalone-upgrade-from-2x.md)
-- [Use Stream Analytics to process exported data from Application Insights](/azure/azure-monitorapp/export-stream-analytics.md)
-- [Troubleshooting guide: Azure Monitor Application Insights for Java](/azure/azure-monitorapp/java-standalone-troubleshoot.md)
+- [Application Insights for web pages](app/javascript.md)
+- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)
+- [Quickstart: Start monitoring your website with Azure Monitor Application Insights](app/website-monitoring.md)
+- [Visualizations for Application Change Analysis (preview)](app/change-analysis-visualizations.md)
+- [Use Application Change Analysis (preview) in Azure Monitor](app/change-analysis.md)
+- [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)
+- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md)
+- [Enable Snapshot Debugger for .NET apps in Azure App Service](app/snapshot-debugger-appservice.md)
+- [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](app/snapshot-debugger-function-app.md)
+- [<a id=troubleshooting></a> Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](app/snapshot-debugger-troubleshoot.md)
+- [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)
+- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
+- [Upgrading from Application Insights Java 2.x SDK](app/java-standalone-upgrade-from-2x.md)
+- [Use Stream Analytics to process exported data from Application Insights](app/export-stream-analytics.md)
+- [Troubleshooting guide: Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md)
## Containers

**Updated articles**

-- [Troubleshooting Container insights](/azure/azure-monitorcontainers/container-insights-troubleshoot.md)
-- [How to view Kubernetes logs, events, and pod metrics in real-time](/azure/azure-monitorcontainers/container-insights-livedata-overview.md)
-- [How to query logs from Container insights](/azure/azure-monitorcontainers/container-insights-log-search.md)
-- [Configure PV monitoring with Container insights](/azure/azure-monitorcontainers/container-insights-persistent-volumes.md)
-- [Monitor your Kubernetes cluster performance with Container insights](/azure/azure-monitorcontainers/container-insights-analyze.md)
-- [Configure Azure Red Hat OpenShift v3 with Container insights](/azure/azure-monitorcontainers/container-insights-azure-redhat-setup.md)
-- [Configure Azure Red Hat OpenShift v4.x with Container insights](/azure/azure-monitorcontainers/container-insights-azure-redhat4-setup.md)
-- [Enable monitoring of Azure Arc enabled Kubernetes cluster](/azure/azure-monitorcontainers/container-insights-enable-arc-enabled-clusters.md)
-- [Configure hybrid Kubernetes clusters with Container insights](/azure/azure-monitorcontainers/container-insights-hybrid-setup.md)
-- [Recommended metric alerts (preview) from Container insights](/azure/azure-monitorcontainers/container-insights-metric-alerts.md)
-- [Enable Container insights](/azure/azure-monitorcontainers/container-insights-onboard.md)
-- [Container insights overview](/azure/azure-monitorcontainers/container-insights-overview.md)
-- [Configure scraping of Prometheus metrics with Container insights](/azure/azure-monitorcontainers/container-insights-prometheus-integration.md)
+- [Troubleshooting Container insights](containers/container-insights-troubleshoot.md)
+- [How to view Kubernetes logs, events, and pod metrics in real-time](containers/container-insights-livedata-overview.md)
+- [How to query logs from Container insights](containers/container-insights-log-search.md)
+- [Configure PV monitoring with Container insights](containers/container-insights-persistent-volumes.md)
+- [Monitor your Kubernetes cluster performance with Container insights](containers/container-insights-analyze.md)
+- [Configure Azure Red Hat OpenShift v3 with Container insights](containers/container-insights-azure-redhat-setup.md)
+- [Configure Azure Red Hat OpenShift v4.x with Container insights](containers/container-insights-azure-redhat4-setup.md)
+- [Enable monitoring of Azure Arc enabled Kubernetes cluster](containers/container-insights-enable-arc-enabled-clusters.md)
+- [Configure hybrid Kubernetes clusters with Container insights](containers/container-insights-hybrid-setup.md)
+- [Recommended metric alerts (preview) from Container insights](containers/container-insights-metric-alerts.md)
+- [Enable Container insights](containers/container-insights-onboard.md)
+- [Container insights overview](containers/container-insights-overview.md)
+- [Configure scraping of Prometheus metrics with Container insights](containers/container-insights-prometheus-integration.md)
## Essentials

**Updated articles**

-- [Advanced features of the Azure metrics explorer](/azure/azure-monitoressentials/metrics-charts.md)
-- [Application Insights log-based metrics](/azure/azure-monitoressentials/app-insights-metrics.md)
-- [Getting started with Azure Metrics Explorer](/azure/azure-monitoressentials/metrics-getting-started.md)
+- [Advanced features of the Azure metrics explorer](essentials/metrics-charts.md)
+- [Application Insights log-based metrics](essentials/app-insights-metrics.md)
+- [Getting started with Azure Metrics Explorer](essentials/metrics-getting-started.md)
## General
Welcome to what's new in the Azure Monitor docs for April 2021. This article lis
**Updated articles**

-- [Azure Monitor Network Insights](/azure/azure-monitorinsights/network-insights-overview.md)
-- [Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)](/azure/azure-monitorinsights/wire-data.md)
-- [Monitor your SQL deployments with SQL insights (preview)](/azure/azure-monitorinsights/sql-insights-overview.md)
+- [Azure Monitor Network Insights](insights/network-insights-overview.md)
+- [Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)](insights/wire-data.md)
+- [Monitor your SQL deployments with SQL insights (preview)](insights/sql-insights-overview.md)
## Logs

**Updated articles**

-- [Create a Log Analytics workspace in the Azure portal](/azure/azure-monitorlogs/quick-create-workspace.md)
-- [Manage usage and costs with Azure Monitor Logs](/azure/azure-monitorlogs/manage-cost-storage.md)
-- [Log data ingestion time in Azure Monitor](/azure/azure-monitorlogs/data-ingestion-time.md)
+- [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md)
+- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
+- [Log data ingestion time in Azure Monitor](logs/data-ingestion-time.md)
## Virtual Machines

**New articles**

-- [Troubleshoot VM insights](/azure/azure-monitorvm/vminsights-troubleshoot.md)
+- [Troubleshoot VM insights](vm/vminsights-troubleshoot.md)
**Updated articles**

-- [Create interactive reports VM insights with workbooks](/azure/azure-monitorvm/vminsights-workbooks.md)
-- [Enable VM insights overview](/azure/azure-monitorvm/vminsights-enable-overview.md)
-- [Troubleshoot Azure Monitor for VMs guest health (preview)](/azure/azure-monitorvm/vminsights-health-troubleshoot.md)
-- [Monitoring Azure virtual machines with Azure Monitor](/azure/azure-monitorvm/monitor-vm-azure.md)
-- [Integrate System Center Operations Manager with VM insights Map feature](/azure/azure-monitorvm/service-map-scom.md)
-- [How to create alerts from VM insights](/azure/azure-monitorvm/vminsights-alerts.md)
-- [Configure Log Analytics workspace for VM insights](/azure/azure-monitorvm/vminsights-configure-workspace.md)
-- [Enable VM insights by using Azure Policy](/azure/azure-monitorvm/vminsights-enable-policy.md)
-- [Enable VM insights using Resource Manager templates](/azure/azure-monitorvm/vminsights-enable-resource-manager.md)
-- [VM insights Generally Available (GA) Frequently Asked Questions](/azure/azure-monitorvm/vminsights-ga-release-faq.md)
-- [Enable VM insights guest health (preview)](/azure/azure-monitorvm/vminsights-health-enable.md)
-- [Disable monitoring of your VMs in VM insights](/azure/azure-monitorvm/vminsights-optout.md)
-- [Overview of VM insights](/azure/azure-monitorvm/vminsights-overview.md)
-- [How to chart performance with VM insights](/azure/azure-monitorvm/vminsights-performance.md)
+- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
+- [Enable VM insights overview](vm/vminsights-enable-overview.md)
+- [Troubleshoot Azure Monitor for VMs guest health (preview)](vm/vminsights-health-troubleshoot.md)
+- [Monitoring Azure virtual machines with Azure Monitor](vm/monitor-vm-azure.md)
+- [Integrate System Center Operations Manager with VM insights Map feature](vm/service-map-scom.md)
+- [How to create alerts from VM insights](vm/vminsights-alerts.md)
+- [Configure Log Analytics workspace for VM insights](vm/vminsights-configure-workspace.md)
+- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)
+- [Enable VM insights using Resource Manager templates](vm/vminsights-enable-resource-manager.md)
+- [VM insights Generally Available (GA) Frequently Asked Questions](vm/vminsights-ga-release-faq.md)
+- [Enable VM insights guest health (preview)](vm/vminsights-health-enable.md)
+- [Disable monitoring of your VMs in VM insights](vm/vminsights-optout.md)
+- [Overview of VM insights](vm/vminsights-overview.md)
+- [How to chart performance with VM insights](vm/vminsights-performance.md)
## Visualizations

**Updated articles**

-- [Programmatically manage workbooks](/azure/azure-monitorvisualize/workbooks-automate.md)
+- [Programmatically manage workbooks](visualize/workbooks-automate.md)
## Community contributors
azure-percept How To Select Update Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-select-update-package.md
To ensure you apply the correct update package to your dev kit, you must first d
> [!WARNING]
> Applying the incorrect update package could result in your dev kit becoming inoperable. It is important that you follow these steps to ensure you apply the correct update package.
-1. Power on your dev kit and ensure it's connected to Azure Percept Studio.
-1. In Azure Percept Studio, select **Devices** from the left menu.
-1. From the device list, select the name of the device that is currently connected. The status will say **Connected**.
-1. Select **Open device in IoT Hub**
-1. You may be asked to sign into your Azure account again.
-1. Select **Device twin**.
-1. Scroll through the device twin properties and locate **"model"** and **"swVersion"** under **"deviceInformation"** and make a note of their values.
+Option 1:
+1. Log in to the [Azure Percept Studio](https://docs.microsoft.com/en-us/azure/azure-percept/overview-azure-percept-studio).
+2. In **Devices**, choose your devkit device.
+3. In the **General** tab, look for the **Model** and **SW Version** information.
+
+Option 2:
+1. View the **IoT Edge Device** list of the **IoT Hub** service from the Microsoft Azure portal.
+2. Choose your devkit device from the device list.
+3. Select **Device twin**.
+4. Scroll through the device twin properties and locate **"model"** and **"swVersion"** under **"deviceInformation"** and make a note of their values.
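For reference, the relevant fragment of the device twin's reported properties looks roughly like this (a sketch; the values follow the model/swVersion formats shown in the table below):

```json
{
  "properties": {
    "reported": {
      "deviceInformation": {
        "model": "PE-101",
        "swVersion": "2021.102.108.112"
      }
    }
  }
}
```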
## Determine the correct update package

Using the **model** and **swVersion** identified in the previous section, check the table below to determine which update package to download.
Using the **model** and **swVersion** identified in the previous section, check
|model |swVersion |Update method |Download links |Note |
||||||
-|PE-101 |2020.108.101.105, <br>2020.108.114.120, <br>2020.109.101.122, <br>2020.109.116.120, <br>2021.101.106.118 |**USB only** |[2021.104.110.103 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |Public Preview major release |
-|PE-101 |2021.102.108.112, <br> |OTA or USB |[2021.104.110.103 OTA manifest](https://go.microsoft.com/fwlink/?linkid=2155625)<br>[2021.104.110.103 OTA update package](https://go.microsoft.com/fwlink/?linkid=2161538)<br>[2021.104.110.103 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |Public Preview major release |
-|APDK-101 |All swVersions |OTA or USB | [2021.105.111.112 OTA manifest](https://go.microsoft.com/fwlink/?linkid=2163554)<br>[2021.105.111.112 OTA update package](https://go.microsoft.com/fwlink/?linkid=2163456)<br>[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2163555) |Latest monthly release (May) |
+|PE-101 |2020.108.101.105, <br>2020.108.114.120, <br>2020.109.101.122, <br>2020.109.116.120, <br>2021.101.106.118 |**USB only** |[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |May release (2105) |
+|PE-101 |2021.102.108.112, <br> |OTA or USB |[2021.105.111.112 OTA manifest (PE-101)](https://go.microsoft.com/fwlink/?linkid=2155625)<br>[2021.105.111.112 OTA update package](https://go.microsoft.com/fwlink/?linkid=2161538)<br>[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2155734) |May release (2105) |
+|APDK-101 |All swVersions |OTA or USB | [2021.105.111.112 OTA manifest (APDK-101)](https://go.microsoft.com/fwlink/?linkid=2163554)<br>[2021.105.111.112 OTA update package](https://go.microsoft.com/fwlink/?linkid=2163456)<br>[2021.105.111.112 USB update package](https://go.microsoft.com/fwlink/?linkid=2163555) |May release (2105) |
## Next steps
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-management-group.md
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01'
}
```
-To target another management group, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set the `scope` property.
+To target another management group, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set the `scope` property. Provide the management group name.
```bicep
targetScope = 'managementGroup'
module exampleModule 'module.bicep' = {
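For reference, a Bicep file scoped this way is deployed with the management-group deployment command. A minimal sketch, assuming placeholder names for the deployment, location, and management group:

```azurecli
az deployment mg create \
  --name demoMGDeployment \
  --location westus \
  --management-group-id myManagementGroup \
  --template-file main.bicep
```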
You can also target subscriptions within a management group. The user deploying the template must have access to the specified scope.
-To target a subscription within the management group, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set the `scope` property.
+To target a subscription within the management group, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set the `scope` property. Provide the subscription ID.
```bicep
targetScope = 'managementGroup'
To learn about other scopes, see:
* [Resource group deployments](deploy-to-resource-group.md) * [Subscription deployments](deploy-to-subscription.md)
-* [Tenant deployments](deploy-to-tenant.md)
+* [Tenant deployments](deploy-to-tenant.md)
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-subscription.md
targetScope = 'subscription'
// resource group created in target subscription
resource exampleResource 'Microsoft.Resources/resourceGroups@2020-10-01' = {
  ...
-}
+}
```

For examples of deploying to the subscription, see [Create resource groups](#create-resource-groups) and [Assign policy definition](#assign-policy-definition).
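At subscription scope, a Bicep file like these is deployed with the subscription deployment command; a minimal CLI sketch with placeholder values:

```azurecli
az deployment sub create \
  --name demoSubDeployment \
  --location westus \
  --template-file main.bicep
```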
module exampleModule 'module.bicep' = {
### Scope to resource group
-To deploy resources to a resource group within the subscription, add a module and set its `scope` property. If the resource group already exists, use the [resourceGroup function](bicep-functions-scope.md#resourcegroup) to set the scope value.
+To deploy resources to a resource group within the subscription, add a module and set its `scope` property. If the resource group already exists, use the [resourceGroup function](bicep-functions-scope.md#resourcegroup) to set the scope value. Provide the resource group name.
```bicep
targetScope = 'subscription'
To learn about other scopes, see:
* [Resource group deployments](deploy-to-resource-group.md) * [Management group deployments](deploy-to-management-group.md)
-* [Tenant deployments](deploy-to-tenant.md)
+* [Tenant deployments](deploy-to-tenant.md)
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-tenant.md
resource mgName_resource 'Microsoft.Management/managementGroups@2020-02-01' = {
### Scope to management group
-To target a management group within the tenant, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set its `scope` property.
+To target a management group within the tenant, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set its `scope` property. Provide the management group name.
```bicep
targetScope = 'tenant'
module 'module.bicep' = {
### Scope to subscription
-To target a subscription within the tenant, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set its `scope` property.
+To target a subscription within the tenant, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set its `scope` property. Provide the subscription ID.
```bicep
targetScope = 'tenant'
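A tenant-scope file like this is deployed with the tenant deployment command; a minimal sketch (the deployment name and location are placeholders):

```azurecli
az deployment tenant create \
  --name demoTenantDeployment \
  --location westus \
  --template-file main.bicep
```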
azure-sql Automatic Tuning Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-enable.md
Automatic tuning can be enabled at the server or the database level through:
- [T-SQL](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true) commands

> [!NOTE]
-> For Azure SQL Managed Instance, the supported option FORCE_LAST_GOOD_PLAN can only be configured through [T-SQL](https://azure.microsoft.com/blog/automatic-tuning-introduces-automatic-plan-correction-and-t-sql-management) only. The Azure portal based configuration and automatic index tuning options described in this article do not apply to Azure SQL Managed Instance.
+> For Azure SQL Managed Instance, the supported option FORCE_LAST_GOOD_PLAN can only be configured through [T-SQL](https://azure.microsoft.com/blog/automatic-tuning-introduces-automatic-plan-correction-and-t-sql-management). The Azure portal based configuration and automatic index tuning options described in this article do not apply to Azure SQL Managed Instance.
> [!NOTE]
> Configuring automatic tuning options through the ARM (Azure Resource Manager) template is not supported at this time.
To receive automated email notifications on recommendations made by the automati
- Read the [Automatic tuning article](automatic-tuning-overview.md) to learn more about automatic tuning and how it can help you improve your performance.
- See [Performance recommendations](database-advisor-implement-performance-recommendations.md) for an overview of Azure SQL Database performance recommendations.
-- See [Query Performance Insights](query-performance-insight-use.md) to learn about viewing the performance impact of your top queries.
+- See [Query Performance Insights](query-performance-insight-use.md) to learn about viewing the performance impact of your top queries.
azure-web-pubsub Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/resource-faq.md
This is the FAQ of Azure Web PubSub service.
## Is Azure Web PubSub service ready for production use?
-The Azure Web PubSub service is in public preview state and not committed SLA.
+The Azure Web PubSub service is in public preview state and doesn't have a committed SLA.
+
+## How do I choose between Azure SignalR Service and Azure Web PubSub service?
+
+Both [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service) and [Azure Web PubSub service](https://azure.microsoft.com/services/web-pubsub) help customers build real-time web applications easily, with large scale and high availability, and enable customers to focus on their business logic instead of managing the messaging infrastructure. In general, you may choose Azure SignalR Service if you already use the SignalR library to build your real-time application. If instead you're looking for a generic solution for building real-time applications based on WebSocket and the publish-subscribe pattern, you may choose Azure Web PubSub service. The Azure Web PubSub service is **not** a replacement for Azure SignalR Service. They target different scenarios.
+
+Azure SignalR Service is more suitable if:
+
+- You're already using ASP.NET or ASP.NET Core SignalR, primarily using .NET or need to integrate with .NET ecosystem (like Blazor).
+- There's a SignalR client available for your platform.
+- You need an established protocol that supports a wide variety of calling patterns (RPC and streaming) and transports (WebSocket, server-sent events, and long polling), with a client that manages the connection lifetime on your behalf.
+
+Azure Web PubSub service is more suitable for situations where:
+
+- You need to build real-time applications based on WebSocket technology or publish-subscribe over WebSocket.
+- You want to build your own subprotocol or use existing advanced protocols over WebSocket (for example, MQTT, AMQP over WebSocket).
+- You're looking for a lightweight server, for example, sending messages to clients without going through the configured backend.
## Where does my data reside?
backup Backup Azure Manage Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-manage-vms.md
In the Azure portal, the Recovery Services vault dashboard provides access to va
* The total size of all backup snapshots.
* The number of VMs that are enabled for backups.
-You can manage backups by using the dashboard and by drilling down to individual VMs. To begin machine backups, open the vault on the dashboard.
+You can manage backups by using the dashboard and by drilling down to individual VMs. To begin machine backups, open the vault on the dashboard:
![Full dashboard view with slider](./media/backup-azure-manage-vms/bottom-slider.png)
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
# What is Custom Neural Voice?
-Custom Neural Voice is a text-to-speech (TTS) feature that lets you create a one-of-a-kind customized synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. Based on the Neural TTS technology and the multi-lingual multi-speaker universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally.
+Custom Neural Voice is a text-to-speech (TTS) feature that lets you create a one-of-a-kind customized synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. Based on the Neural TTS technology and the multi-lingual multi-speaker universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable across languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the supported [languages](language-support.md#customization) for Custom Neural Voice and the cross-lingual feature.
> [!NOTE]
> The Custom Neural Voice feature requires registration, and access to it is limited based upon Microsoft's eligibility and use criteria. Customers who wish to use this feature are required to register their use cases through the [intake form](https://aka.ms/customneural).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
After your dataset has been validated, you can use it to build your custom neura
2. Select the neural training method for your model and target language.
-By default, your voice model is trained in the same language of your training data. You can also select to create a secondary language (preview) for your voice model. Check the languages supported for custom neural voice: [language for customization](language-support.md#customization).
+By default, your voice model is trained in the same language as your training data. You can also choose to create a secondary language (preview) for your voice model. Check the languages supported for Custom Neural Voice and the cross-lingual feature: [language for customization](language-support.md#customization).
3. Next, choose the dataset you want to use for training, and specify a speaker file.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
More than 75 standard voices are available in over 45 languages and locales, whi
### Customization
-Custom Voice is available in the neural tier (a.k.a, Custom Neural Voice). Check below for the languages supported.
+Custom Voice is available in the neural tier (also known as Custom Neural Voice). Based on the Neural TTS technology and the multi-lingual multi-speaker universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable across languages. Check below for the languages supported.
> [!IMPORTANT]
> The standard tier including the statistical parametric and the concatenative training methods of custom voice is being deprecated and will be retired on 2/29/2024. If you are using non-neural/standard Custom Voice, migrate to Custom Neural Voice immediately to enjoy the better quality and deploy the voices responsibly.
-| Language | Locale | Neural |
-|--|--|--|
-| Bulgarian (Bulgaria)| `bg-BG` | Yes |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Yes |
-| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes |
-| Dutch (Netherlands) | `nl-NL` | Yes |
-| English (Australia) | `en-AU` | Yes |
-| English (India) | `en-IN` | Yes |
-| English (United Kingdom) | `en-GB` | Yes |
-| English (United States) | `en-US` | Yes |
-| French (Canada) | `fr-CA` | Yes |
-| French (France) | `fr-FR` | Yes |
-| German (Germany) | `de-DE` | Yes |
-| Italian (Italy) | `it-IT` | Yes |
-| Japanese (Japan) | `ja-JP` | Yes |
-| Korean (Korea) | `ko-KR` | Yes |
-| Portuguese (Brazil) | `pt-BR` | Yes |
-| Spanish (Mexico) | `es-MX` | Yes |
-| Spanish (Spain) | `es-ES` | Yes |
+| Language | Locale | Neural | Cross-lingual |
+|--|--|--|--|
+| Bulgarian (Bulgaria)| `bg-BG` | Yes | No |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Yes | Yes |
+| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes | Yes |
+| Dutch (Netherlands) | `nl-NL` | Yes | No |
+| English (Australia) | `en-AU` | Yes | Yes |
+| English (India) | `en-IN` | Yes | No |
+| English (United Kingdom) | `en-GB` | Yes | Yes |
+| English (United States) | `en-US` | Yes | Yes |
+| French (Canada) | `fr-CA` | Yes | Yes |
+| French (France) | `fr-FR` | Yes | Yes |
+| German (Germany) | `de-DE` | Yes | Yes |
+| Italian (Italy) | `it-IT` | Yes | Yes |
+| Japanese (Japan) | `ja-JP` | Yes | Yes |
+| Korean (Korea) | `ko-KR` | Yes | Yes |
+| Portuguese (Brazil) | `pt-BR` | Yes | Yes |
+| Spanish (Mexico) | `es-MX` | Yes | Yes |
+| Spanish (Spain) | `es-ES` | Yes | Yes |
Select the right locale that matches the training data you have to train a custom voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
The `readResults` node contains all of the recognized text. Text is organized by
```json
{
- "status": "succeeded",
+ "status": "succeeded",
"createdDateTime": "2021-03-04T22:29:33Z", "lastUpdatedDateTime": "2021-03-04T22:29:36Z", "analyzeResult": {
The `readResults` node contains all of the recognized text. Text is organized by
} ], ...
+ }
+ ]
} ],
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
Title: Encrypt registry with a customer-managed key description: Learn about encryption-at-rest of your Azure container registry, and how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault Previously updated : 03/03/2021 Last updated : 05/27/2021
This feature is available in the **Premium** container registry service tier. Fo
## Things to know
-* You can currently enable a customer-managed key only when you create a registry. When enabling the key, you configure a *user-assigned* managed identity to access the key vault.
+* You can currently enable a customer-managed key only when you create a registry. When enabling the key, you configure a *user-assigned* managed identity to access the key vault. Later, you can enable the registry's system-managed identity for key vault access if needed.
* After enabling encryption with a customer-managed key on a registry, you can't disable the encryption.
* Azure Container Registry supports only RSA or RSA-HSM keys. Elliptic curve keys aren't currently supported.
* [Content trust](container-registry-content-trust.md) is currently not supported in a registry encrypted with a customer-managed key.
az keyvault delete-policy \
--object-id $identityPrincipalID
```
-Revoking the key effectively blocks access to all registry data, since the registry can't access the encryption key. If access to the key is enabled or the deleted key is restored, your registry will pick the key so you can again access the encrypted registry data.
+Revoking the key effectively blocks access to all registry data, since the registry can't access the encryption key. If access to the key is enabled or the deleted key is restored, your registry will pick the key so you can again access the encrypted registry data.
## Advanced scenario: Key Vault firewall
-You might want to store the encryption key using an existing Azure key vault configured with a [Key Vault firewall](../key-vault/general/network-security.md), which denies public access and allows only private endpoint or selected virtual networks.
+> [!IMPORTANT]
+> Currently, during registry deployment, a registry's *user-assigned* identity can only be configured to access an encryption key in a key vault that allows public access, not one configured with a [Key Vault firewall](../key-vault/general/network-security.md).
+>
+> To access a key vault protected with a Key Vault firewall, the registry must bypass the firewall using its *system-managed* identity. Currently these settings can only be configured after the registry is deployed.
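As a rough sketch of those post-deployment steps (the registry and vault names are placeholders, and your required key permissions may differ):

```azurecli
# Enable the registry's system-assigned managed identity
az acr identity assign --name myregistry --identities '[system]'

# Get the principal ID of that identity
principalID=$(az acr show --name myregistry --query identity.principalId --output tsv)

# Grant the identity access to the key in the key vault
az keyvault set-policy --name mykeyvault \
  --object-id $principalID \
  --key-permissions get unwrapKey wrapKey
```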
For this scenario, first create a new user-assigned identity, key vault, and container registry encrypted with a customer-managed key, using the [Azure CLI](#enable-customer-managed-keycli), [portal](#enable-customer-managed-keyportal), or [template](#enable-customer-managed-keytemplate). Detailed steps are in preceding sections in this article.

> [!NOTE]
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-import-images.md
Title: Import container images description: Import container images to an Azure container registry by using Azure APIs, without needing to run Docker commands. Previously updated : 01/15/2021 Last updated : 05/28/2021 # Import container images to a container registry
az acr import \
## Import from an Azure container registry in a different AD tenant
-To import from an Azure container registry in a different Azure Active Directory tenant, specify the source registry by login server name, and provide username and password credentials that enable pull access to the registry. For example, use a [repository-scoped token](container-registry-repository-scoped-permissions.md) and password, or the appID and password of an Active Directory [service principal](container-registry-auth-service-principal.md) that has ACRPull access to the source registry.
+To import from an Azure container registry in a different Azure Active Directory tenant, specify the source registry by login server name, and provide credentials that enable pull access to the registry.
+
+### Cross-tenant import with username and password
+For example, use a [repository-scoped token](container-registry-repository-scoped-permissions.md) and password, or the appID and password of an Active Directory [service principal](container-registry-auth-service-principal.md) that has ACRPull access to the source registry.
```azurecli
az acr import \
az acr import \
--password <SP_Passwd>
```
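Pieced together, the full command looks roughly like this (the service principal appID and password are placeholders):

```azurecli
az acr import \
  --name myregistry \
  --source sourceregistry.azurecr.io/sourcerepo:tag \
  --image targetimage:tag \
  --username <SP_App_ID> \
  --password <SP_Passwd>
```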
+### Cross-tenant import with access token
+
+To access the source registry using an identity in the source tenant that has registry permissions, you can get an access token:
+
+```azurecli
+# Login to Azure CLI with the identity, for example a user-assigned managed identity
+az login --identity --username <identity_ID>
+
+# Get an access token for the signed-in identity
+az account get-access-token
+```
+
+In the target tenant, pass the access token as a password to the `az acr import` command. The source registry is specified by login server name. Notice that no username is needed in this command:
+
+```azurecli
+az acr import \
+ --name myregistry \
+ --source sourceregistry.azurecr.io/sourcerepo:tag \
+ --image targetimage:tag \
+ --password <access-token>
+```
+ ## Import from a non-Azure private container registry Import an image from a non-Azure private registry by specifying credentials that enable pull access to the registry. For example, pull an image from a private Docker registry:
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool.md
Previously updated : 06/17/2020 Last updated : 06/01/2021 # Copy Data tool in Azure Data Factory
You can preview part of the data from the selected source data store, which allo
![File settings](./media/copy-data-tool/file-format-settings.png)
-After the detection:
+After the detection, select **Preview data**:
![Detected file settings and preview](./media/copy-data-tool/after-detection.png)
Suppose that you have input folders in the following format:
Click the **Browse** button for **File or folder**, browse to one of these folders (for example, 2016->03->01->02), and click **Choose**. You should see 2016/03/01/02 in the text box.
-Then, replace **2016** with **{year}**, **03** with **{month}**, **01** with **{day}**, and **02** with **{hour}**, and press the **Tab** key. You should see drop-down lists to select the format for these four variables:
+Then, replace **2016** with **{year}**, **03** with **{month}**, **01** with **{day}**, and **02** with **{hour}**, and press the **Tab** key. When you select **Incremental load: time-partitioned folder/file names** in the **File loading behavior** section and you select **Schedule** or **Tumbling window** on the **Properties** page, you should see drop-down lists to select the format for these four variables:
![Filter file or folder](./media/copy-data-tool/filter-file-or-folder.png)
data-factory Quickstart Create Data Factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-copy-data-tool.md
Previously updated : 11/09/2020 Last updated : 06/01/2021 # Quickstart: Use the Copy Data tool to copy data
In this quickstart, you use the Azure portal to create a data factory. Then, you
1. After the creation is complete, you see the **Data Factory** page. Select the **Author & Monitor** tile to start the Azure Data Factory user interface (UI) application on a separate tab. -
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+
## Start the Copy Data tool
-1. On the **Let's get started** page, select the **Copy Data** tile to start the Copy Data tool.
+1. On the **Let's get started** page, select the **Copy Data** tile to start the Copy Data tool.
!["Copy Data" tile](./media/doc-common-process/get-started-page.png)
-1. On the **Properties** page of the Copy Data tool, you can specify a name for the pipeline and its description, then select **Next**.
+1. On the **Properties** page of the Copy Data tool, choose **Built-in copy task** under **Task type**, then select **Next**.
!["Properties" page](./media/quickstart-create-data-factory-copy-data-tool/copy-data-tool-properties-page.png)

1. On the **Source data store** page, complete the following steps:
- a. Click **+ Create new connection** to add a connection.
+ 1. Click **+ Create new connection** to add a connection.
- b. Select the linked service type that you want to create for the source connection. In this tutorial, we use **Azure Blob Storage**. Select it from the gallery, and then select **Continue**.
+ 1. Select the linked service type that you want to create for the source connection. In this tutorial, we use **Azure Blob Storage**. Select it from the gallery, and then select **Continue**.
- ![Select Blob](./media/quickstart-create-data-factory-copy-data-tool/select-blob-source.png)
-
- c. On the **New Linked Service (Azure Blob Storage)** page, specify a name for your linked service. Select your storage account from the **Storage account name** list, test connection, and then select **Create**.
-
- ![Configure the Azure Blob storage account](./media/quickstart-create-data-factory-copy-data-tool/configure-blob-storage.png)
-
- d. Select the newly created linked service as source, and then click **Next**.
+ ![Select Blob](./media/quickstart-create-data-factory-copy-data-tool/select-blob-source.png)
+ 1. On the **New connection (Azure Blob Storage)** page, specify a name for your connection. Select your Azure subscription from the **Azure subscription** list and your storage account from the **Storage account name** list, test connection, and then select **Create**.
-1. On the **Choose the input file or folder** page, complete the following steps:
+ ![Configure the Azure Blob storage account](./media/quickstart-create-data-factory-copy-data-tool/configure-blob-storage.png)
- a. Click **Browse** to navigate to the **adftutorial/input** folder, select the **emp.txt** file, and then click **Choose**.
+ 1. Select the newly created connection in the **Connection** block.
+ 1. In the **File or folder** section, select **Browse** to navigate to the **adftutorial/input** folder, select the **emp.txt** file, and then click **OK**.
+ 1. Select the **Binary copy** checkbox to copy file as-is, and then select **Next**.
- d. Select the **Binary copy** checkbox to copy file as-is, and then select **Next**.
+ :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/source-data-store.png" alt-text="Screenshot that shows the Source data store page.":::
- !["Choose the input file or folder" page](./media/quickstart-create-data-factory-copy-data-tool/select-binary-copy.png)
+1. On the **Destination data store** page, complete the following steps:
+ 1. Select the **AzureBlobStorage** connection that you created in the **Connection** block.
+ 1. In the **Folder path** section, enter **adftutorial/output** for the folder path.
-1. On the **Destination data store** page, select the **Azure Blob Storage** linked service you created, and then select **Next**.
+ :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/destination-data-store.png" alt-text="Screenshot that shows the Destination data store page.":::
-1. On the **Choose the output file or folder** page, enter **adftutorial/output** for the folder path, and then select **Next**.
+ 1. Leave other settings as default and then select **Next**.
- !["Choose the output file or folder" page](./media/quickstart-create-data-factory-copy-data-tool/configure-sink-path.png)
+1. On the **Settings** page, specify a name for the pipeline and its description, then select **Next** to use other default configurations.
-1. On the **Settings** page, select **Next** to use the default configurations.
+ :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/settings.png" alt-text="Screenshot that shows the settings page.":::
1. On the **Summary** page, review all settings, and select **Next**.
In this quickstart, you use the Azure portal to create a data factory. Then, you
!["Deployment complete" page](./media/quickstart-create-data-factory-copy-data-tool/deployment-page.png)
-1. The application switches to the **Monitor** tab. You see the status of the pipeline on this tab. Select **Refresh** to refresh the list. Click the link under **PIPELINE NAME** to view activity run details or rerun the pipeline.
+1. The application switches to the **Monitor** tab. You see the status of the pipeline on this tab. Select **Refresh** to refresh the list. Click the link under **Pipeline name** to view activity run details or rerun the pipeline.
![Refresh pipeline](./media/quickstart-create-data-factory-copy-data-tool/refresh-pipeline.png)
-1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **ACTIVITY NAME** column for more details about copy operation. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
+1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **Activity name** column for more details about copy operation. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
-1. To go back to the Pipeline Runs view, select the **ALL pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
+1. To go back to the Pipeline Runs view, select the **All pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
1. Verify that the **emp.txt** file is created in the **output** folder of the **adftutorial** container. If the output folder doesn't exist, the Data Factory service automatically creates it.
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Save the file in the **C:\ADFv2QuickStartPSH** folder. (If the folder doesn't al
## Review template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-data-factory-v2-blob-to-blob-copy/).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Datafactory&pageNumber=1&sort=Popular).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.datafactory/data-factory-v2-blob-to-blob-copy/azuredeploy.json":::
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-opcua-data.md
+
+# Mandatory fields.
+ Title: Ingesting OPC UA data with Azure Digital Twins
+
+description: Steps to get your Azure OPC UA data into Azure Digital Twins
++ Last updated : 5/20/2021++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Ingesting OPC UA data with Azure Digital Twins
+
+The [OPC Unified Architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-ua/) is a platform-independent, service-oriented architecture for the manufacturing space. It is used to get telemetry data from devices.
+
+Getting OPC UA Server data to flow into Azure Digital Twins requires multiple components installed on different devices, as well as some custom code and settings that need to be configured.
+
+This article shows how to connect all these pieces together to get your OPC UA nodes into Azure Digital Twins. You can continue to build on this guidance for your own solutions.
+
+> [!NOTE]
+> This article does not address converting OPC UA nodes into DTDL. It only addresses getting the telemetry from your OPC UA Server into existing Azure Digital Twins. If you're interested in generating DTDL models from OPC UA data, visit the [UANodeSetWebViewer](https://github.com/barnstee/UANodesetWebViewer) and [OPCUA2DTDL](https://github.com/khilscher/OPCUA2DTDL) repositories.
+
+## Prerequisites
+
+Before completing this article, complete the following prerequisites:
+* **Download sample repo**: This article uses a [DTDL model](concepts-models.md) file and an Azure function body from the [OPC UA to Azure Digital Twins GitHub Repo](https://github.com/Azure-Samples/opcua-to-azure-digital-twins). Start by downloading the sample repo onto your machine. You can select the **Code** button for the repo to either clone the repository or download it as a .zip file to your machine.
+
+ :::image type="content" source="media/how-to-ingest-opcua-data/download-repo.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub, highlighting the steps to clone or download the code." lightbox="media/how-to-ingest-opcua-data/download-repo.png":::
+
+ If you download the repository as a .zip, be sure to unzip it and extract the files.
+* **Download Visual Studio**: This article uses Visual Studio to publish an Azure function. You can download the latest version of Visual Studio from [Visual Studio Downloads](https://visualstudio.microsoft.com/downloads/).
+
+## Architecture
+
+Here are the components that will be included in this solution.
+
+ :::image type="content" source="media/how-to-ingest-opcua-data/opcua-to-adt-diagram-1.png" alt-text="Drawing of the opc ua to Azure Digital Twins architecture" lightbox="media/how-to-ingest-opcua-data/opcua-to-adt-diagram-1.png":::
+
+| Component | Description |
+| | |
+| OPC UA Server | OPC UA Server from [ProSys](https://www.prosysopc.com/products/opc-ua-simulation-server/) or [Kepware](https://www.kepware.com/en-us/products/#KEPServerEX) to simulate the OPC UA data. |
+| [Azure IoT Edge](../iot-edge/about-iot-edge.md) | IoT Edge is an IoT Hub service that gets installed on a local Linux gateway device. It is required for the OPC Publisher module to run and send data to IoT Hub. |
+| [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher) | This is an IoT Edge module built by the Azure Industrial IoT team. This module connects to your OPC UA Server and sends the node data into Azure IoT Hub. |
+| [Azure IoT Hub](../iot-hub/about-iot-hub.md) | OPC Publisher sends the OPC UA telemetry into Azure IoT Hub. From there, you can process the data through an Azure Function and into Azure Digital Twins. |
+| Azure Digital Twins | The platform that enables you to create a digital representation of real-world things, places, business processes, and people. |
+| [Azure function](../azure-functions/functions-overview.md) | A custom Azure function is used to route the telemetry arriving in Azure IoT Hub to the proper twins and properties in Azure Digital Twins. |
+
+## Set up edge components
+
+The first step is getting the devices and software set up on the edge. Here are the edge components you'll set up, in this order:
+1. [OPC UA simulation server](#set-up-opc-ua-server)
+1. [IoT Hub and IoT Edge device](#set-up-iot-edge-device)
+1. [Gateway device](#set-up-gateway-device)
+
+This section will walk through a brief setup of each.
+
+For more detailed information on installing each of these pieces, see the following resources:
+* [Step-by-step guide to installing OPC Publisher on Azure IoT Edge](https://www.linkedin.com/pulse/step-by-step-guide-installing-opc-publisher-azure-iot-kevin-hilscher)
+* [Install IoT Edge on Linux](../iot-edge/how-to-install-iot-edge.md)
+* [OPC Publisher on GitHub](https://github.com/Azure/iot-edge-opc-publisher)
+* [Configure OPC Publisher](../iot-accelerators/howto-opc-publisher-configure.md)
+
+### Set up OPC UA Server
+
+For this article, you do not need access to physical devices running a real OPC UA Server. Instead, you can install the free [Prosys OPC UA Simulation Server](https://www.prosysopc.com/products/opc-ua-simulation-server/) on a Windows VM to generate the OPC UA data. This section walks through this setup.
+
+If you already have a physical OPC UA device or another OPC UA simulation server you'd like to use, you can skip ahead to the next section, [Set up IoT Edge device](#set-up-iot-edge-device).
+
+#### Create Windows 10 virtual machine
+
+The Prosys Software requires a simple virtual resource. Using the [Azure portal](https://portal.azure.com), [create a Windows 10 virtual machine (VM)](../virtual-machines/windows/quick-create-portal.md) with the following specifications:
+* **Availability options**: No infrastructure redundancy required
+* **Image**: Windows 10 Pro, Version 2004 - Gen2
+* **Size**: Standard_B1s - 1 vcpu, 1 GiB memory
++
+Your VM must be reachable over the internet. For simplicity in this walkthrough, you can open all ports and assign the VM a Public IP address. This is done in the **Networking** tab of virtual machine setup.
++
+> [!WARNING]
+> Opening all ports to the internet is not recommended for production solutions, as it can present a security risk. You may want to consider other security strategies for your environment.
+
+Collect the **Public IP** value to use in the next step.
+
+Finish the VM setup.
+
+#### Install OPC UA simulation software
+
+From your new Windows virtual machine, install the [Prosys OPC UA Simulation Server](https://www.prosysopc.com/products/opc-ua-simulation-server/).
+
+Once the download and install are completed, launch the server. It may take a few moments for the OPC UA Server to start. Once it's ready, the Server Status should show as **Running**.
+
+Next, copy the value of **Connection Address (UA TCP)**. Paste it somewhere safe to use later. In the pasted value, replace the machine name part of the address with the **Public IP** of your VM from earlier, like this:
+
+`opc.tcp://<ip-address>:53530/OPCUA/SimulationServer`
+
+You will use this updated value later in this article.
+
+Finally, view the simulation nodes provided by default with the server by selecting the **Objects** tab and expanding the Objects::FolderType and Simulation::FolderType folders. You'll see the simulation nodes, each with its own unique `NodeId` value.
+
+Capture the `NodeId` values for the simulated nodes that you want to publish. You'll need these IDs later in the article to simulate data from these nodes.
+
+> [!TIP]
+> Verify the OPC UA Server is accessible by following the "Verify the OPC UA Service is running and reachable" steps in the [Step-by-step guide to installing OPC Publisher on Azure IoT Edge](https://www.linkedin.com/pulse/step-by-step-guide-installing-opc-publisher-azure-iot-kevin-hilscher).
+
+#### Verify completion
+
+In this section, you set up the OPC UA Server for simulating data. Verify that you've completed the following checklist:
+
+> [!div class="checklist"]
+> * Prosys Simulation Server is set up and running
+> * You've copied the UA TCP Connection Address (`opc.tcp://<ip-address>:53530/OPCUA/SimulationServer`)
+> * You've captured the list of `NodeId`s for the simulated nodes you want published
+
+### Set up IoT Edge device
+
+In this section, you'll set up an IoT Hub instance and an IoT Edge device.
+
+First, [create an Azure IoT Hub instance](../iot-hub/iot-hub-create-through-portal.md). For this article, you can create an instance in the **F1 - Free** tier.
++
+After you have created the Azure IoT Hub instance, select **IoT Edge** from the instance's left navigation menu, and select **Add an IoT Edge device**.
++
+Follow the prompts to create a new device.
+
+Once your device is created, copy either the **Primary Connection String** or **Secondary Connection String** value. You will need this later when you set up the edge device.
++
+#### Verify completion
+
+In this section, you set up IoT Edge and IoT Hub in preparation to create a gateway device. Verify that you've completed the following checklist:
+> [!div class="checklist"]
+> * IoT Hub instance has been created.
+> * IoT Edge device has been provisioned.
+
+### Set up gateway device
+
+In order to get your OPC UA Server data into IoT Hub, you need a device that runs IoT Edge with the OPC Publisher module. OPC Publisher will then listen to OPC UA node updates and will publish the telemetry into IoT Hub in JSON format.
+
+#### Create Ubuntu Server virtual machine
+
+Using the [Azure portal](https://portal.azure.com), create an Ubuntu Server virtual machine with the following specifications:
+* **Availability options**: No infrastructure redundancy required
+* **Image**: Ubuntu Server 18.04 LTS - Gen1
+* **Size**: Standard_B1ms - 1 vcpu, 2 GiB memory
+ - The default size (Standard_B1s - 1 vcpu, 1 GiB memory) is too slow for RDP. Choosing a size with 2 GiB of memory provides a better RDP experience.
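If you'd rather script the VM creation than click through the portal, here's a minimal Azure CLI sketch (the resource group, VM name, and username are placeholders):

```azurecli
az vm create \
  --resource-group myResourceGroup \
  --name gateway-vm \
  --image UbuntuLTS \
  --size Standard_B1ms \
  --admin-username azureuser \
  --generate-ssh-keys
```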
++
+> [!NOTE]
+> If you choose to RDP into your Ubuntu VM, you can follow the instructions to [Install and configure xrdp to use Remote Desktop with Ubuntu](../virtual-machines/linux/use-remote-desktop.md).
+
+#### Install IoT Edge container
+
+Follow the instructions to [Install IoT Edge on Linux](../iot-edge/how-to-install-iot-edge.md).
+
+Once the installation completes, run the following command to verify the status of your installation:
+
+```bash
+sudo iotedge check
+```
+
+This command will run several tests to make sure your installation is ready to go.
+
+#### Install OPC Publisher module
+
+Next, install the OPC Publisher module on your gateway device.
+
+Start by getting the module from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft_iot.iotedge-opc-publisher).
++
+Then, follow the installation steps documented in the [OPC Publisher GitHub Repo](https://github.com/Azure/iot-edge-opc-publisher) to install the module on your Ubuntu VM.
+
+In the step for [specifying container create options](https://github.com/Azure/iot-edge-opc-publisher#specifying-container-create-options-in-the-azure-portal), make sure to add the following JSON:
+
+```JSON
+{
+ "Hostname": "opcpublisher",
+ "Cmd": [
+ "--pf=/appdata/publishednodes.json",
+ "--aa"
+ ],
+ "HostConfig": {
+ "Binds": [
+ "/iiotedge:/appdata"
+ ]
+ }
+}
+```
++
+>[!NOTE]
+>The create options above should work in most cases without any changes, but if you're using your own gateway device that's different from the article guidance so far, you may need to adjust the settings to your situation.
+
+Follow the rest of the prompts to create the module.
+
+After about 15 seconds, you can run the `iotedge list` command on your gateway device, which lists all the modules running on your IoT Edge device. You should see the OPCPublisher module up and running.
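For example, on the gateway device:

```bash
sudo iotedge list
```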
++
+Finally, go to the `/iiotedge` directory and create a *publishednodes.json* file. The IDs in the file need to match the `NodeId` values that you [gathered earlier from the OPC Server](#install-opc-ua-simulation-software). Your file should look something like this:
+
+```JSON
+[
+ {
+ "EndpointUrl": "opc.tcp://20.185.195.172:53530/OPCUA/SimulationServer",
+ "UseSecurity": false,
+ "OpcNodes": [
+ {
+ "Id": "ns=3;i=1001"
+ },
+ {
+ "Id": "ns=3;i=1002"
+ },
+ {
+ "Id": "ns=3;i=1003"
+ },
+ {
+ "Id": "ns=3;i=1004"
+ },
+ {
+ "Id": "ns=3;i=1005"
+ },
+ {
+ "Id": "ns=3;i=1006"
+ }
+ ]
+ }
+]
+```
+
+Save your changes to the *publishednodes.json* file.
+
+Then, run the following command:
+
+```bash
+sudo iotedge logs OPCPublisher -f
+```
+
+The command will result in the output of the OPC Publisher logs. If everything is configured and running correctly, you will see something like the following:
++
+Data should now be flowing from an OPC UA Server into your IoT Hub.
+
+To monitor the messages flowing into Azure IoT Hub, you can use the following command:
+
+```azurecli-interactive
+az iot hub monitor-events -n <iot-hub-instance> -t 0
+```
+
+> [!TIP]
+> Try using [Azure IoT Explorer](../iot-pnp/howto-use-iot-explorer.md) to monitor IoT Hub messages.
+
+#### Verify completion
+
+In this section, you set up a gateway device running IoT Edge that will receive data from the OPC UA Server. Verify that you've completed the following checklist:
+> [!div class="checklist"]
+> * Ubuntu Server VM has been created.
+> * IoT Edge has been installed and is on the Ubuntu VM.
+> * OPC Publisher module has been installed.
+> * *publishednodes.json* file has been created and configured.
+> * OPC Publisher module is running, and telemetry data is flowing to IoT Hub.
+
+In the next step, you'll get this telemetry data into Azure Digital Twins.
+
+## Set up Azure Digital Twins
+
+Now that you have data flowing from OPC UA Server into Azure IoT Hub, the next step is to set up and configure Azure Digital Twins.
+
+For this example, you'll use a single model and a single twin instance to match the properties on the simulation server.
+
+>[!TIP]
+>If you're interested in seeing a more complex scenario with more models and twins, view the chocolate factory example in the [OPC UA to Azure Digital Twins GitHub Repo](https://github.com/Azure-Samples/opcua-to-azure-digital-twins).
+
+### Create Azure Digital Twins instance
+
+First, deploy a new Azure Digital Twins instance, using the guidance in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md).
+
+### Upload model and create twin
+
+Next, add a model and twin to your instance. The model file that you'll upload to the instance is part of the sample project you downloaded in the [Prerequisites](#prerequisites) section, located at *Simulation Example/simulation-dtdl-model.json*.
+
+You can use [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) to upload the Simulation model, and create a new twin called **simulation-1**.
++
+### Verify completion
+
+In this section, you set up an Azure Digital Twins instance containing a model and a twin. Verify that you've completed the following checklist:
+> [!div class="checklist"]
+> * Azure Digital Twins instance has been deployed.
+> * Simulation model has been uploaded into the Azure Digital Twins instance.
+> * simulation-1 twin has been created from the Simulation model.
+
+## Set up Azure function
+
+Now that you have the OPC UA nodes sending data into IoT Hub, and an Azure Digital Twins instance ready to receive the data, you'll need to map and save the data to the correct twin and properties in Azure Digital Twins. In this section, you'll set this up using an Azure function and an *opcua-mapping.json* file.
+
+The data flow in this section involves these steps:
+
+1. An Azure function uses an event subscription to receive messages coming from IoT Hub.
+1. The Azure function processes each telemetry event that arrives. It extracts the `NodeId` from the event, and looks it up against the items in the *opcua-mapping.json* file. The file maps each `NodeId` to a certain `twinId` and property in Azure Digital Twins where the node's value should be saved.
+1. The Azure function generates the appropriate patch document to update the corresponding digital twin, and runs the twin property update command.
+
+### Create opcua-mapping.json file
+
+First, create your *opcua-mapping.json* file. Start with a blank JSON file and fill in entries that map `NodeId` values to `twinId` values and properties in Azure Digital Twins, according to the example and schema below. You will need to create a mapping entry for every `NodeId`.
+
+```JSON
+[
+ {
+ "NodeId": "1001",
+ "TwinId": "simulation",
+ "Property": "Counter",
+ "ModelId": "dtmi:com:microsoft:iot:opcua:simulation;1"
+ },
+ ...
+]
+```
+
+Here is the schema for the entries:
+
+| Property | Description | Required |
+| | | |
+| NodeId | Value from the OPC UA node. For example: ns=3;i={value} | ✔ |
+| TwinId | TwinId ($dtId) of the twin you want to save the telemetry value for | ✔ |
+| Property | Name of the property on the twin to save the telemetry value | ✔ |
+| ModelId | The modelId to create the twin if the TwinId does not exist | |
+
+> [!TIP]
+> For a complete example of an *opcua-mapping.json* file, see the [OPC UA to Azure Digital Twins GitHub repo](https://github.com/Azure-Samples/opcua-to-azure-digital-twins).
+
+When you're finished adding mappings, save the file.
+
+Now that you have your mapping file, you'll need to store it in a location that the Azure function can access. One such location is [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md).
+
+Follow the instructions to [Create a storage container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container), and import the *opcua-mapping.json* file into the container. You can also perform storage management tasks using [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+
+Next, create a [shared access signature for the container](../storage/common/storage-sas-overview.md) and save that URL. Later, you'll provide the URL to the Azure function so that it can access the stored file.
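If you prefer the CLI for these storage steps, a rough sketch (the storage account and container names are placeholders, and the expiry is only an example):

```azurecli
# Create a container and upload the mapping file
az storage container create --account-name mystorageacct --name mappings
az storage blob upload --account-name mystorageacct --container-name mappings \
  --name opcua-mapping.json --file opcua-mapping.json

# Generate a read-only SAS URL for the blob
az storage blob generate-sas --account-name mystorageacct --container-name mappings \
  --name opcua-mapping.json --permissions r --expiry 2021-12-31T23:59Z --full-uri
```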
++
+### Publish Azure function
+
+In this section, you'll publish the Azure function project that you downloaded in [Prerequisites](#prerequisites). The function will process the OPC UA data and update Azure Digital Twins.
+
+#### Step 1: Open the function in Visual Studio
+
+Navigate to the downloaded [OPC UA to Azure Digital Twins](https://github.com/Azure-Samples/opcua-to-azure-digital-twins) project on your local machine, and into the *Azure Functions/OPCUAFunctions* folder. Open the **OPCUAFunctions.sln** solution in Visual Studio.
+
+#### Step 2: Publish the function
+
+Publish the function project to a function app in Azure.
+
+For instructions on how to do this, see the section [Publish the function app to Azure](how-to-create-azure-function.md#publish-the-function-app-to-azure) of the *How-to: Set up a function for processing data* article.
+
+#### Step 3: Configure the function app
+
+**Assign an access role** for the function and **configure the application settings** so that it can access your Azure Digital Twins instance. For instructions on how to do this, see the section [Set up security access for the function app](how-to-create-azure-function.md#set-up-security-access-for-the-function-app) of the *How-to: Set up a function for processing data* article.
+
+#### Step 4: Add application settings
+
+You'll also need to add some application settings to fully set up your environment. Go to the [Azure portal](https://portal.azure.com) and navigate to your newly created Azure function by searching for its name in the portal search bar.
+
+Select Configuration from the function's left navigation menu. Use the **+ New application setting** button to start creating new settings.
++
+There are three application settings you need to create:
+
+| Setting | Description | Required |
+| | | |
+| ADT_SERVICE_URL | URL for your Azure Digital Twins instance. Example: `https://example.api.eus.digitaltwins.azure.net` | ✔ |
+| JSON_MAPPINGFILE_URL | URL of the shared access signature for the *opcua-mapping.json* file | ✔ |
+| LOG_LEVEL | Log level verbosity. Default is 100. Verbose is 300 | |
++
+> [!TIP]
+> Set the `LOG_LEVEL` application setting on the function to 300 for a more verbose logging experience.
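The same settings can also be applied from the CLI; a sketch with placeholder values:

```azurecli
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group-name> \
  --settings "ADT_SERVICE_URL=https://example.api.eus.digitaltwins.azure.net" \
  "JSON_MAPPINGFILE_URL=<blob-SAS-URL>" \
  "LOG_LEVEL=300"
```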
+
+### Create event subscription
+
+Lastly, create an event subscription to connect your function app and ProcessOPCPublisherEventsToADT function to your IoT Hub. The event subscription is needed so that data can flow from the gateway device into IoT Hub through the function, which then updates Azure Digital Twins.
+
+For instructions, follow the same steps used in [Connect the IoT hub to the Azure function](tutorial-end-to-end.md#connect-the-iot-hub-to-the-azure-function) from the Azure Digital Twins *Tutorial: Connect an end-to-end solution*.
+
+The event subscription will have an Endpoint type of **Azure function**, and an Endpoint of **ProcessOPCPublisherEventsToADT**.
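For reference, a roughly equivalent CLI call might look like this (all resource IDs are placeholders, and the event type assumes you're routing device telemetry through Event Grid):

```azurecli
az eventgrid event-subscription create \
  --name opcua-telemetry-to-function \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<iot-hub-name>" \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>/functions/ProcessOPCPublisherEventsToADT" \
  --included-event-types Microsoft.Devices.DeviceTelemetry
```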
++
+After this step, all required components should be installed and running. Data should be flowing from your OPC UA Simulation Server, through Azure IoT Hub, and into your Azure Digital Twins instance.
+
+The next section provides some Azure CLI commands that you can run to monitor the events and verify everything is working successfully.
+
+### Verify and monitor
+
+The commands in this section can be run in the [Azure Cloud Shell](https://shell.azure.com), or in a [local Azure CLI window](/cli/azure/install-azure-cli).
+
+Run this command to monitor IoT Hub events:
+```azurecli-interactive
+az iot hub monitor-events -n <IoT-hub-name> -t 0
+```
+
+Run this command to monitor Azure function event processing:
+```azurecli-interactive
+az webapp log tail --name <function-name> --resource-group <resource-group-name>
+```
+
+Finally, you can use Azure Digital Twins Explorer to manually monitor twin property updates.
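You can also spot-check a twin's current property values from the CLI, assuming the `azure-iot` CLI extension is installed:

```azurecli
az dt twin show --dt-name <instance-name> --twin-id simulation-1
```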
++
+### Verify completion
+
+In this section, you set up an Azure function to connect the OPC UA data to Azure Digital Twins. Verify that you've completed the following checklist:
+> [!div class="checklist"]
+> * Created and imported *opcua-mapping.json* file into a blob storage container.
+> * Published the sample function ProcessOPCPublisherEventsToADT to a function app in Azure.
+> * Added three new application settings to the Azure Functions app.
+> * Created an event subscription to send IoT Hub events to the function app.
+> * Used Azure CLI commands to verify the final data flow.
+
+## Next steps
+
+In this article, you set up a full data flow for getting simulated OPC UA Server data into Azure Digital Twins, where it updates a property on a digital twin.
+
+Next, use the following resources to read more about the supporting tools and processes that were used in this article:
+
+* [Step-by-step guide to installing OPC Publisher on Azure IoT Edge](https://www.linkedin.com/pulse/step-by-step-guide-installing-opc-publisher-azure-iot-kevin-hilscher)
+* [Install IoT Edge on Linux](../iot-edge/how-to-install-iot-edge.md)
+* [OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher)
+* [Configure OPC Publisher](../iot-accelerators/howto-opc-publisher-configure.md)
+* [UANodeSetWebViewer](https://github.com/barnstee/UANodesetWebViewer)
+* [OPCUA2DTDL](https://github.com/khilscher/OPCUA2DTDL)
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for offline m
| **Azure SQL VM** | SQL Server | ✔ | GA |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL** | MySQL | ✔ | |
-| | RDS MySQL | X | |
+| **Azure DB for MySQL - Single Server** | MySQL | ✔ | |
+| | RDS MySQL | ✔ | |
+| | Azure DB for MySQL* | ✔ | |
+| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | |
+| | RDS MySQL | ✔ | |
+| | Azure DB for MySQL* | ✔ | |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X | |
| | RDS PostgreSQL | X | |
-| | Oracle | X | |
+| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X | |
+| | RDS PostgreSQL | X | |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X | |
| | RDS PostgreSQL | X | |
The following table shows Azure Database Migration Service support for online mi
| **Azure DB for MySQL** | MySQL | ✔ | GA |
| | RDS MySQL | ✔ | GA |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | ✔ | GA |
-| | Azure DB for PostgreSQL - Single server | ✔ | GA |
+| | Azure DB for PostgreSQL - Single server* | ✔ | GA |
+| | RDS PostgreSQL | ✔ | GA |
+| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | ✔ | GA |
+| | Azure DB for PostgreSQL - Single server* | ✔ | GA |
| | RDS PostgreSQL | ✔ | GA |
-| | Oracle | X | |
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
| | RDS PostgreSQL | ✔ | GA |
+> [!NOTE]
> If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you are migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you are migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
## Next steps
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
To complete this tutorial, you need to:
* Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
* Have an on-premises MySQL database with version 5.7. If not, then download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.7.
+* MySQL offline migration is supported only on the Premium DMS SKU.
* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application. The Azure Database for MySQL version should be equal to or higher than the on-premises MySQL version. For example, MySQL 5.7 can migrate to Azure Database for MySQL 5.7 or be upgraded to version 8.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
Registration of the resource provider needs to be done on each Azure subscriptio
3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
-4. Select a pricing tier and move to the networking screen. Offline migration capability is available in both Standard and Premium pricing tier.
+4. Select a pricing tier and move to the networking screen. Offline migration capability is available only on the Premium pricing tier.
For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-filtering.md
Key is the field in the event data that you're using for filtering. It can be on
- Number
- Boolean
- String
-- Array. You need to set the `enableAdvancedFilteringOnArrays` property to true to use this feature. Currently, the Azure portal doesn't support enabling this feature.
+- Array. You need to set the `enableAdvancedFilteringOnArrays` property to true to use this feature.
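+
+As a hedged sketch, assuming a recent Azure CLI where `--enable-advanced-filtering-on-arrays` is available, the property can be set when creating an event subscription; the `data.colors` key and all resource names are hypothetical:
+
+```azurecli
+# Creates a subscription with array filtering enabled (sketch; names are illustrative).
+az eventgrid event-subscription create \
+  --name mysubscription \
+  --source-resource-id <topic-resource-id> \
+  --endpoint <endpoint-url> \
+  --enable-advanced-filtering-on-arrays true \
+  --advanced-filter data.colors StringIn blue red
+```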
```json "filter":
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-premium-overview.md
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
## Next steps
-You can start using Event Hubs Premium (Preview) via [Azure portal](https://aka.ms/eventhubsclusterquickstart). Refer [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing and [Event Hubs FAQ](event-hubs-faq.yml) to find answers to some frequently asked questions about Event Hubs.
+You can start using Event Hubs Premium (Preview) via the [Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). See [Event Hubs Premium pricing](https://azure.microsoft.com/pricing/details/event-hubs/) for more details on pricing, and [Event Hubs FAQ](event-hubs-faq.yml) for answers to some frequently asked questions about Event Hubs.
iot-hub-device-update Device Update Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-resources.md
During public preview, two Device update accounts can be created per subscriptio
## Configuring Device update linked IoT Hub
-In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the "Built-In" Event Hub. Clicking the "Configure IoT Hub" button within your instance configures the required message routes and access policy required to communicate with IoT devices.
+In order for Device Update to receive change notifications from IoT Hub, Device Update integrates with the "Built-In" Event Hub. Clicking the "Configure IoT Hub" button within your instance configures the message routes, consumer groups, and access policy required to communicate with IoT devices.
+
+### Message Routing
The following Message Routes are configured for Device Update:
-| Route Name | Routing Query | Description |
-| : | :- |:- |
-| DeviceUpdate.DigitalTwinChanges | true |Listens for Digital Twin Changes Events |
-| DeviceUpdate.DeviceLifeCycle | opType = 'deleteDeviceIdentity' | Listens for Devices that have been deleted |
-| DeviceUpdate.TelemetryModelInformation | iothub-interface-id = "urn:azureiot:ModelDiscovery:ModelInformation:1 | Listens for new devices types |
-| DeviceUpdate.DeviceTwinEvents| (opType = 'updateTwin' OR opType = 'replaceTwin') AND IS_DEFINED($body.tags.ADUGroup) | Listens for new Device Update Groups |
+| Route Name | Data Source | Routing Query | Endpoint | Description |
+| : | :- |:- |:- |:- |
+| DeviceUpdate.DigitalTwinChanges | DigitalTwinChangeEvents | true | events | Listens for Digital Twin Changes Events |
+| DeviceUpdate.DeviceLifeCycle | DeviceLifeCycleEvents | opType = 'deleteDeviceIdentity' OR opType = 'deleteModuleIdentity' | events | Listens for Devices that have been deleted |
+| DeviceUpdate.DeviceTwinEvents| TwinChangeEvents | (opType = 'updateTwin' OR opType = 'replaceTwin') AND IS_DEFINED($body.tags.ADUGroup) | events | Listens for new Device Update Groups |
+
+> [!NOTE]
+> Route names don't matter when configuring these routes. We include the DeviceUpdate prefix to make the names consistent and easily identifiable as being used for Device Update. The rest of the route properties should be configured as they are in the table above for Device Update to work properly.
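+
+As an illustration, one of these routes could also be created manually with Azure CLI; a hedged sketch, with illustrative resource group and hub names:
+
+```azurecli
+# Recreates the DeviceUpdate.DeviceTwinEvents route from the table above (sketch).
+az iot hub route create \
+  --resource-group myresourcegroup \
+  --hub-name myiothub \
+  --route-name DeviceUpdate.DeviceTwinEvents \
+  --source-type twinchangeevents \
+  --endpoint-name events \
+  --enabled true \
+  --condition "(opType = 'updateTwin' OR opType = 'replaceTwin') AND IS_DEFINED(\$body.tags.ADUGroup)"
+```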
+
+### Consumer Group
+
+Configuring the IoT Hub also creates an event hub consumer group that is required by the Device Update Management services.
++
+### Access Policy
+
+A shared access policy named "deviceupdateservice" is required by the Device Update Management services to query for update-capable devices. The "deviceupdateservice" policy is created and given the following permissions as part of configuring the IoT Hub:
+- Registry Read
+- Service Connect
+- Device Connect
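+
+A hedged sketch of creating an equivalent policy manually, assuming an illustrative hub name:
+
+```azurecli
+# Creates the "deviceupdateservice" shared access policy with the
+# permissions listed above (hub name is illustrative).
+az iot hub policy create \
+  --hub-name myiothub \
+  --name deviceupdateservice \
+  --permissions RegistryRead ServiceConnect DeviceConnect
+```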
+
## Next steps
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Each IoT hub is provisioned with a certain number of units in a specific tier. T
The tier also determines the throttling limits that IoT Hub enforces on all operations.
-## IoT Plug and Play
-
-IoT Plug and Play devices send at least one telemetry message for each interface, including the root, which may increase the number of messages counted towards your message quota.
## Operation throttles

Operation throttles are rate limitations that are applied in minute ranges and are intended to prevent abuse. They're also subject to [traffic shaping](#traffic-shaping).
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data-encryption.md
The `hbi_workspace` flag controls the amount of [data Microsoft collects for dia
* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you have not created any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters.
* Cleans up your local scratch disk between runs.
* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault.
-* Enables IP filtering to ensure the underlying batch pools cannot be called by any external services other than AzureMachineLearningService
-* Compute instances are supported in HBI workspace
### Azure Blob storage
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-high-availability-machine-learning.md
A multi-regional deployment relies on creation of Azure Machine Learning and oth
* __Regional availability__: Use regions that are close to your users. To check regional availability for Azure Machine Learning, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/).
* __Azure paired regions__: Paired regions coordinate platform updates and prioritize recovery efforts where needed. For more information, see [Azure paired regions](../best-practices-availability-paired-regions.md).
-* __Service availability__: Decide whether the resources used by your solution should be hot/hot, hot/warm, or hot/code.
+* __Service availability__: Decide whether the resources used by your solution should be hot/hot, hot/warm, or hot/cold.
* __Hot/hot__: Both regions are active at the same time, with one region ready to begin use immediately.
* __Hot/warm__: Primary region active, secondary region has critical resources (for example, deployed models) ready to start. Non-critical resources would need to be manually deployed in the secondary region.
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
See these notebooks for examples of configuring runs for various training scenar
## Troubleshooting
+* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, which introduces a breaking change to `azureml-core`. To fix this error, uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and install a different version; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"` (see the command sketch after this list).
+
+* **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`. Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
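+
+The `ruamel-yaml` fix from the first item above, as shell commands:
+
+```bash
+# Remove the breaking ruamel-yaml release, then pin to a supported version
+# (v0.15.35 to v0.17.4, per the note above).
+pip uninstall ruamel-yaml
+pip install "ruamel-yaml>=0.15.35,<0.17.5"
+```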
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-managed-identities.md
You can also use [an ARM template](https://github.com/Azure/azure-quickstart-tem
> [!IMPORTANT]
> If you bring your own associated resources, instead of having Azure Machine Learning service create them, you must grant the managed identity roles on those resources. Use the [role assignment ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-machine-learning-dependencies-role-assignment) to make the assignments.
-For a workspace with (customer-managed keys for encryption)[https://docs.microsoft.com/azure/machine-learning/concept-data-encryption], you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use argument
+For a workspace with [customer-managed keys for encryption](concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use argument
__user-assigned-identity-for-cmk-encryption__ (CLI) or __user_assigned_identity_for_cmk_encryption__ (SDK) to pass in the managed identity. This managed identity can be the same or different as the workspace primary user assigned managed identity. If you have an existing workspace, you can update it from system-assigned to user-assigned managed identity using ```az ml workspace update``` CLI command, or ```Workspace.update``` Python SDK method.
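A hedged sketch of that update, assuming the v1 `azure-cli-ml` extension and illustrative names, with the argument spelled as named above:

```azurecli
# Pass a user-assigned managed identity for CMK encryption (sketch; names are illustrative).
az ml workspace update \
  --workspace-name myworkspace \
  --resource-group myresourcegroup \
  --user-assigned-identity-for-cmk-encryption <identity-resource-id>
```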
If you have an existing workspace, you can update it from system-assigned to use
## Next steps
-* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md).
+* Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md).
machine-learning Tutorial Train Models With Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-models-with-aml.md
You complete the following experiment setup and run steps in Azure Machine Learn
### <a name="open"></a> Open the cloned notebook
-1. Open the **tutorials** folder that was closed into your **User files** section.
+1. Open the **tutorials** folder that was cloned into your **User files** section.
> [!IMPORTANT]
> You can view notebooks in the **samples** folder but you can't run a notebook from there. To run a notebook, make sure you open the cloned version of the notebook in the **User Files** section.
media-services Filters Dynamic Manifest Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/filters-dynamic-manifest-dotnet-how-to.md
This topic shows how to use Media Services .NET SDK to define a filter for a Vid
- Review [Filters and dynamic manifests](filters-dynamic-manifest-concept.md). - [Create a Media Services account](./account-create-how-to.md). Make sure to remember the resource group name and the Media Services account name. - Get information needed to [access APIs](./access-api-howto.md)-- Review [Upload, encode, and stream using Azure Media Services](stream-files-tutorial-with-api.md) to see how to [start using .NET SDK](stream-files-tutorial-with-api.md#start-using-media-services-apis-with-net-sdk)
+- Review [Upload, encode, and stream using Azure Media Services](stream-files-tutorial-with-api.md) to see how to [start using .NET SDK](stream-files-tutorial-with-api.md#start-using-media-services-apis-with-the-net-sdk)
## Define a filter
purview Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/purview-connector-overview.md
details.
||[Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)|Yes| Yes| Yes| Yes| Yes| Yes|
||[Azure SQL Database](register-scan-azure-sql-database.md)|Yes| Yes| No| Yes| Yes| Yes|
||[Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)|Yes| Yes| No| Yes| Yes| Yes|
-||[Azure Synapse Analytics (formerly SQL DW)](register-scan-azure-synapse-analytics.md)|Yes| Yes| No| Yes| Yes| Yes|
+||[Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)|Yes| Yes| No| Yes| Yes| Yes|
+||[Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)|Yes| Yes| No| Yes| Yes| Yes|
|Database|[Hive Metastore DB](register-scan-oracle-source.md)|Yes| Yes| No| No| No| Yes|
||[Oracle DB](register-scan-oracle-source.md)|Yes| Yes| No| No| No| Yes|
||[SQL Server](register-scan-on-premises-sql-server.md)|Yes| Yes| No| Yes| Yes| Yes|
details.
|Power BI|[Power BI](register-scan-power-bi-tenant.md)|Yes| Yes| No| No| No| Yes|
|Services and apps|[SAP ECC](register-scan-sapecc-source.md)|Yes| Yes| No| Yes| Yes| Yes|
||[SAP S4HANA](register-scan-saps4hana-source.md)|Yes| Yes| No| Yes| Yes| Yes|
+|Multi-cloud|[Amazon S3](register-scan-amazon-s3.md)|Yes| Yes| No| No| No| Yes|
## Scan regions

The following is a list of all the Azure data source (data center) regions where the Purview scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Purview instance.
sentinel Import Export Analytics Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/import-export-analytics-rules.md
+
+ Title: Import and export Azure Sentinel analytics rules | Microsoft Docs
+description: Export and import analytics rules to and from ARM templates to aid deployment
+ Last updated : 05/30/2021
+
+# Export and import analytics rules to and from ARM templates
+
+> [!IMPORTANT]
+>
+> - Exporting and importing rules is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Introduction
+
+You can now export your analytics rules to Azure Resource Manager (ARM) template files, and import rules from these files, as part of managing and controlling your Azure Sentinel deployments as code. The export action will create a JSON file (named *Azure_Sentinel_analytic_rule.json*) in your browser's downloads location, which you can then rename, move, and otherwise handle like any other file.
+
+The exported JSON file is workspace-independent, so it can be imported to other workspaces and even other tenants. As code, it can also be version-controlled, updated, and deployed in a managed CI/CD framework.
+
+The file includes all the parameters defined in the analytics rule, so for **Scheduled** rules it includes the underlying query and its accompanying scheduling settings, the severity, incident creation, event- and alert-grouping settings, assigned MITRE ATT&CK tactics, and more. Any type of analytics rule - not just **Scheduled** - can be exported to a JSON file.
+
+## Export rules
+
+1. From the Azure Sentinel navigation menu, select **Analytics**.
+
+1. Select the rule you want to export and click **Export** from the bar at the top of the screen.
+
+ :::image type="content" source="./media/import-export-analytics-rules/export-rule.png" alt-text="Export analytics rule" lightbox="./media/import-export-analytics-rules/export-rule.png":::
+
+ > [!NOTE]
+ > - You can select multiple analytics rules at once for export by marking the check boxes next to the rules and clicking **Export** at the end.
+ >
+ > - You can export all the rules on a single page of the display grid at once, by marking the check box in the header row (next to **SEVERITY**) before clicking **Export**. You can't export more than one page's worth of rules at a time, though.
+ >
+ > - Be aware that in this scenario, a single file (named *Azure_Sentinel_analytic_**rules**.json*) will be created, and will contain JSON code for all the exported rules.
+
+## Import rules
+
+1. Have an analytics rule ARM template JSON file ready.
+
+1. From the Azure Sentinel navigation menu, select **Analytics**.
+
+1. Click **Import** from the bar at the top of the screen. In the resulting dialog box, navigate to and select the JSON file representing the rule you want to import, and select **Open**.
+
+ :::image type="content" source="./media/import-export-analytics-rules/import-rule.png" alt-text="Import analytics rule" lightbox="./media/import-export-analytics-rules/import-rule.png":::
+
+ > [!NOTE]
+ > You can import **up to 50** analytics rules from a single ARM template file.
+
+## Next steps
+
+In this document, you learned how to export and import analytics rules to and from ARM templates.
+- Learn more about [analytics rules](tutorial-detect-threats-built-in.md), including [custom scheduled rules](tutorial-detect-threats-custom.md).
+- Learn more about [ARM templates](../azure-resource-manager/templates/overview.md).
sentinel Tutorial Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-built-in.md
Several new scheduled analytics rule templates produce alerts that are correlate
For more details on how to customize your rules in the rule creation wizard, see [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+## Export rules to an ARM template
+
+You can easily [export your rule to an Azure Resource Manager (ARM) template](import-export-analytics-rules.md) if you want to manage and deploy your rules as code. You can also import rules from template files in order to view and edit them in the user interface.
## Next steps

In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
sentinel Tutorial Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-custom.md
In the **Alert grouping** section, if you want a single incident to be generated
> [!NOTE]
> Alerts generated in Azure Sentinel are available through [Microsoft Graph Security](/graph/security-concept-overview). For more information, see the [Microsoft Graph Security alerts documentation](/graph/api/resources/security-api-overview).
+## Export the rule to an ARM template
+
+If you want to package your rule to be managed and deployed as code, you can easily [export the rule to an Azure Resource Manager (ARM) template](import-export-analytics-rules.md). You can also import rules from template files in order to view and edit them in the user interface.
## Troubleshooting

### Issue: No events appear in query results
spring-cloud Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart-deploy-infrastructure-vnet.md
+
+ Title: Quickstart - Provision Azure Spring Cloud using an Azure Resource Manager template (ARM template)
+description: This quickstart shows you how to deploy a Spring Cloud cluster into an existing virtual network.
+ Last updated : 05/27/2021
+# Quickstart: Provision Azure Spring Cloud using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to deploy an Azure Spring Cloud cluster into an existing virtual network.
+
+Azure Spring Cloud makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-cloud-reference-architecture%2Fmain%2FARM%2Fbrownfield-deployment%2fazuredeploy.json)
+
+## Prerequisites
+
+* An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Two dedicated subnets for the Azure Spring Cloud Cluster, one for the service runtime and another for the Spring Boot micro-service applications. For subnet and virtual network requirements, see the [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* An existing Log Analytics workspace for Azure Spring Cloud diagnostics settings and a workspace-based Application Insights resource. For more information, see [Analyze logs and metrics with diagnostics settings](diagnostic-services.md) and [Use distributed tracing with Azure Spring Cloud](how-to-distributed-tracing.md).
+* Three internal Classless Inter-Domain Routing (CIDR) ranges (at least */16* each) that you've identified for use by the Azure Spring Cloud cluster. These CIDR ranges will not be directly routable and will be used only internally by the Azure Spring Cloud Cluster. Clusters may not use *169.254.0.0/16*, *172.30.0.0/16*, *172.31.0.0/16*, or *192.0.2.0/24* for the internal Spring Cloud CIDR ranges, or any IP ranges included within the cluster virtual network address range.
+* Service permission granted to the virtual network. The Azure Spring Cloud Resource Provider requires Owner permission to your virtual network in order to grant a dedicated and dynamic service principal on the virtual network for further deployment and maintenance. For instructions and more information, see the [Grant service permission to the virtual network](how-to-deploy-in-azure-virtual-network.md#grant-service-permission-to-the-virtual-network) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+* If you're using Azure Firewall or a Network Virtual Appliance (NVA), you'll also need to satisfy the following prerequisites:
+ * Network and fully qualified domain name (FQDN) rules. For more information, see [Virtual network requirements](how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements).
+ * A unique User Defined Route (UDR) applied to each of the service runtime and Spring Boot micro-service application subnets. For more information about UDRs, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). The UDR should be configured with a route for *0.0.0.0/0* with a destination of your NVA before deploying the Spring Cloud cluster. For more information, see the [Bring your own route table](how-to-deploy-in-azure-virtual-network.md#bring-your-own-route-table) section of [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+
+## Review the template
+
+The template used in this quickstart is from [Azure Spring Cloud reference architecture](reference-architecture.md).
++
+Two Azure resources are defined in the template:
+
+* [Microsoft.AppPlatform/Spring](/azure/templates/microsoft.appplatform/spring): Create an Azure Spring Cloud instance.
+* [Microsoft.Insights/components](/azure/templates/microsoft.insights/components): Create an Application Insights workspace.
+
+For Azure CLI and Terraform deployments, see the [Azure Spring Cloud Reference Architecture](https://github.com/Azure/azure-spring-cloud-reference-architecture) repository on GitHub.
+
+## Deploy the template
+
+To deploy the template, follow these steps:
+
+1. Select the following image to sign in to Azure and open a template. The template deploys an Azure Spring Cloud instance into an existing virtual network and a workspace-based Application Insights instance into an existing Azure Monitor Log Analytics workspace.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-spring-cloud-reference-architecture%2Fmain%2FARM%2Fbrownfield-deployment%2fazuredeploy.json)
+
+2. Enter values for the following fields:
+
+- **Resource Group:** select **Create new**, enter a unique name for the **resource group**, and then select **OK**.
+- **springCloudInstanceName:** Enter the name of the Azure Spring Cloud resource.
+- **appInsightsName:** Enter the name of the Application Insights instance for Azure Spring Cloud.
+- **laWorkspaceResourceId:** Enter the resource ID of the existing Log Analytics workspace (for example, */subscriptions/<your subscription>/resourcegroups/<your log analytics resource group>/providers/Microsoft.OperationalInsights/workspaces/<your log analytics workspace name>*.)
+- **springCloudAppSubnetID:** Enter the resource ID of the Azure Spring Cloud App Subnet.
+- **springCloudRuntimeSubnetID:** Enter the resource ID of the Azure Spring Cloud Runtime Subnet.
+- **springCloudServiceCidrs:** Enter a comma-separated list of IP address ranges (3 in total) in CIDR format. The IP ranges are reserved to host underlying Azure Spring Cloud infrastructure. These 3 ranges should be at least */16* unused IP ranges, and must not overlap with any routable subnet IP ranges used within the network.
+- **tags:** Enter any custom tags.
+
+3. Select **Review + Create** and then **Create**.
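+
+Alternatively, the same template can be deployed from the command line; a hedged sketch, using placeholder values for the parameters listed above:
+
+```azurecli
+# Deploys the brownfield ARM template into an existing resource group (sketch;
+# parameter values are placeholders matching the fields listed above).
+az deployment group create \
+  --resource-group my-spring-cloud-rg \
+  --template-uri https://raw.githubusercontent.com/Azure/azure-spring-cloud-reference-architecture/main/ARM/brownfield-deployment/azuredeploy.json \
+  --parameters springCloudInstanceName=<name> \
+               appInsightsName=<name> \
+               laWorkspaceResourceId=<workspace-resource-id> \
+               springCloudAppSubnetID=<app-subnet-id> \
+               springCloudRuntimeSubnetID=<runtime-subnet-id> \
+               springCloudServiceCidrs='10.0.0.0/16,10.1.0.0/16,10.2.0.0/16'
+```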
+
+## Review deployed resources
+
+You can either use the Azure portal to check the deployed resources, or use Azure CLI or Azure PowerShell script to list the deployed resources.
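+
+For example, a minimal sketch of listing the deployed resources with Azure CLI (the resource group name is illustrative):
+
+```azurecli
+# Lists everything deployed into the resource group as a table.
+az resource list --resource-group my-spring-cloud-rg --output table
+```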
+
+## Clean up resources
+
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI or Azure PowerShell, use the following commands:
+
+# [CLI](#tab/azure-cli)
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group delete --name $resourceGroupName &&
+echo "Press [ENTER] to continue ..."
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
++
+## Next steps
+
+In this quickstart, you deployed an Azure Spring Cloud instance into an existing virtual network using an ARM template, and then validated the deployment. To learn more about Azure Spring Cloud and Azure Resource Manager, continue on to the resources below.
+
+- Deploy one of the following sample applications from the locations below:
+ * [Pet Clinic App with MySQL Integration](https://github.com/azure-samples/spring-petclinic-microservices) (Microservices with MySQL backend).
+ * [Simple Hello World](spring-cloud-quickstart.md?tabs=Azure-CLI&pivots=programming-language-java).
+- Use [custom domains](tutorial-custom-domain.md) with Azure Spring Cloud.
+- Expose Azure Spring Cloud applications to the internet using [Azure Application Gateway](expose-apps-gateway-azure-firewall.md).
+- View the secure end-to-end [Azure Spring Cloud reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+- Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
static-web-apps Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/apis.md
Azure Static Web Apps provides serverless API endpoints via [Azure Functions](../azure-functions/functions-overview.md). By using Azure Functions, APIs dynamically scale based on demand, and include the following features:

- **Integrated security** with direct access to user [authentication and role-based authorization](user-information.md) data.
+
- **Seamless routing** that makes the _api_ route available to the web app securely without requiring custom CORS rules.

Azure Static Web Apps APIs are supported by two possible configurations:
The following table contrasts the differences between using managed and existing
| Feature | Managed Functions | Bring your own Functions |
| --- | --- | --- |
-| Access to Azure Functions triggers | Http only | [All](../azure-functions/functions-triggers-bindings.md#supported-bindings) |
-| Supported runtimes | Node.js<br>.NET<br>Python | [All](../azure-functions/supported-languages.md#languages-by-runtime-version) |
+| Access to Azure Functions [triggers](../azure-functions/functions-triggers-bindings.md#supported-bindings) | Http only | All |
+| Supported Azure Functions [runtimes](../azure-functions/supported-languages.md#languages-by-runtime-version) | Node.js 12<br>.NET Core 3.1<br>Python 3.8 | All |
| Supported Azure Functions [hosting plans](../azure-functions/functions-scale.md) | Consumption | Consumption<br>Premium<br>Dedicated |
| [Integrated security](user-information.md) with direct access to user authentication and role-based authorization data | ✔ | ✔ |
| [Routing integration](./configuration.md?#routes) that makes the _api_ route available to the web app securely without requiring custom CORS rules. | ✔ | ✔ |
Logs are only available if you add [Application Insights](monitor.md).
| Managed functions | Bring your own functions |
| --- | --- |
-| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>Managed identity and Azure Key Vault references require the [Standard plan](plans.md).</li><li>Some application settings are managed by the service, therefore the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li></ul> | <ul><li>The Azure Functions app must either be in Node.js 12, .NET Core 3.1, or Python 3.8.</li><li>You are responsible to manage the Functions app deployment.</li></ul> |
+| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>The Azure Functions app must either be in Node.js 12, .NET Core 3.1, or Python 3.8.</li><li>Some application settings are managed by the service, therefore the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li></ul> | <ul><li>You are responsible for managing the Functions app deployment.</li></ul> |
## Next steps
storage Data Lake Storage Directory File Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-directory-file-acl-cli.md
az storage fs file list -f my-file-system --path my-directory --account-name mys
## Upload a file to a directory
-Upload a file to a directory by using the `az storage fs directory upload` command.
+Upload a file to a directory by using the `az storage fs file upload` command.
This example uploads a file named `upload.txt` to a directory named `my-directory`.
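A hedged sketch of the full command, assuming a storage account named mystorageaccount and Azure AD authorization:

```azurecli
# Uploads upload.txt into my-directory (account name and auth mode are illustrative).
az storage fs file upload \
  --source upload.txt \
  --path my-directory/upload.txt \
  --file-system my-file-system \
  --account-name mystorageaccount \
  --auth-mode login
```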
az storage fs file delete -p my-directory/my-file.txt -f my-file-system --accou
- [Samples](https://github.com/Azure/azure-cli/blob/dev/src/azure-cli/azure/cli/command_modules/storage/docs/ADLS%20Gen2.md)
- [Give feedback](https://github.com/Azure/azure-cli-extensions/issues)
- [Known issues](data-lake-storage-known-issues.md#api-scope-data-lake-client-library)
-- [Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)
+- [Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2](data-lake-storage-acl-cli.md)
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 05/10/2021 Last updated : 06/01/2021
When you define an encryption scope, you can specify whether the scope is protec
If you define an encryption scope with a customer-managed key, then you can choose to update the key version either automatically or manually. If you choose to automatically update the key version, then Azure Storage checks the key vault or managed HSM daily for a new version of the customer-managed key and automatically updates the key to the latest version. For more information about updating the key version for a customer-managed key, see [Update the key version](../common/customer-managed-keys-overview.md#update-the-key-version).
+Azure Policy provides a built-in policy to require that encryption scopes use customer-managed keys. For more information, see the **Storage** section in [Azure Policy built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
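+
+As an illustration, a hedged sketch of creating a scope protected by a customer-managed key (all names and the key URI are illustrative):
+
+```azurecli
+# Creates an encryption scope backed by a Key Vault key (sketch).
+az storage account encryption-scope create \
+  --account-name mystorageaccount \
+  --resource-group myresourcegroup \
+  --name myscope \
+  --key-source Microsoft.KeyVault \
+  --key-uri https://myvault.vault.azure.net/keys/mykey
+```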
+
A storage account may have up to 10,000 encryption scopes that are protected with customer-managed keys for which the key version is automatically updated. If your storage account already has 10,000 encryption scopes that are protected with customer-managed keys that are being automatically updated, then the key version must be updated manually for any additional encryption scopes that are protected with customer-managed keys.

### Infrastructure encryption
Keep in mind that customer-managed keys are protected by soft delete and purge p
> [!IMPORTANT]
> It is not possible to delete an encryption scope.

++

## Next steps

- [Azure Storage encryption for data at rest](../common/storage-service-encryption.md)
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
The following table describes the expected behavior for delete and write operati
| [Delete Storage Account](/rest/api/storagerp/storageaccounts/delete) | No change. Containers and blobs in the deleted account are not recoverable. | No change. Containers and blobs in the deleted account are not recoverable. |
| [Delete Container](/rest/api/storageservices/delete-container) | No change. Blobs in the deleted container are not recoverable. | No change. Blobs in the deleted container are not recoverable. |
| [Delete Blob](/rest/api/storageservices/delete-blob) | If used to delete a blob, that blob is marked as soft deleted. <br /><br /> If used to delete a blob snapshot, the snapshot is marked as soft deleted. | If used to delete a blob, the current version becomes a previous version, and the current version is deleted. No new version is created and no soft-deleted snapshots are created.<br /><br /> If used to delete a blob version, the version is marked as soft deleted. |
-| [Undelete Blob](/rest/api/storageservices/delete-blob) | Restores a blob and any snapshots that were deleted within the retention period. | Restores a blob and any versions that were deleted within the retention period. |
+| [Undelete Blob](/rest/api/storageservices/undelete-blob) | Restores a blob and any snapshots that were deleted within the retention period. | Restores a blob and any versions that were deleted within the retention period. |
| [Put Blob](/rest/api/storageservices/put-blob)<br />[Put Block List](/rest/api/storageservices/put-block-list)<br />[Copy Blob](/rest/api/storageservices/copy-blob)<br />[Copy Blob from URL](/rest/api/storageservices/copy-blob) | If called on an active blob, then a snapshot of the blob's state prior to the operation is automatically generated. <br /><br /> If called on a soft-deleted blob, then a snapshot of the blob's prior state is generated only if it is being replaced by a blob of the same type. If the blob is of a different type, then all existing soft deleted data is permanently deleted. | A new version that captures the blob's state prior to the operation is automatically generated. |
| [Put Block](/rest/api/storageservices/put-block) | If used to commit a block to an active blob, there is no change.<br /><br />If used to commit a block to a blob that is soft-deleted, a new blob is created and a snapshot is automatically generated to capture the state of the soft-deleted blob. | No change. |
| [Put Page](/rest/api/storageservices/put-page)<br />[Put Page from URL](/rest/api/storageservices/put-page-from-url) | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. | No change. Page blob data that is overwritten or cleared using this operation is not saved and is not recoverable. |
Data that is overwritten by a call to **Put Page** is not recoverable. An Azure
- [Enable soft delete for blobs](./soft-delete-blob-enable.md)
- [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md)
-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 05/19/2021 Last updated : 06/01/2021
The managed identity that's associated with the storage account must have these
For more information about key permissions, see [Key types, algorithms, and operations](../../key-vault/keys/about-keys-details.md#key-access-control).
+Azure Policy provides a built-in policy to require that storage accounts use customer-managed keys for Blob Storage and Azure Files workloads. For more information, see the **Storage** section in [Azure Policy built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
+
## Customer-managed keys for queues and tables

Data stored in Queue and Table storage is not automatically protected by a customer-managed key when customer-managed keys are enabled for the storage account. You can optionally configure these services to be included in this protection at the time that you create the storage account.
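A hedged sketch of opting queues and tables into customer-managed key support at account creation (names are illustrative):

```azurecli
# Account-scoped encryption keys for Queue and Table storage must be chosen at create time (sketch).
az storage account create \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --encryption-key-type-for-queue Account \
  --encryption-key-type-for-table Account
```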
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/infrastructure-encryption-enable.md
Previously updated : 05/11/2021 Last updated : 06/01/2021
The following JSON example creates a general-purpose v2 storage account that is
+Azure Policy provides a built-in policy to require that infrastructure encryption be enabled for a storage account. For more information, see the **Storage** section in [Azure Policy built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
++
## Create an encryption scope with infrastructure encryption enabled

If infrastructure encryption is enabled for an account, then any encryption scope created on that account automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure encryption setting for an encryption scope cannot be changed after the scope is created. For more information, see [Create an encryption scope](../blobs/encryption-scope-manage.md#create-an-encryption-scope).
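A hedged sketch of enabling infrastructure encryption when creating an account (names are illustrative):

```azurecli
# Infrastructure (double) encryption can only be set when the account is created (sketch).
az storage account create \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --require-infrastructure-encryption
```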
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-require-secure-transfer.md
Previously updated : 04/27/2021 Last updated : 06/01/2021
You can configure your storage account to accept requests from secure connections only by setting the **Secure transfer required** property for the storage account. When you require secure transfer, any requests originating from an insecure connection are rejected. Microsoft recommends that you always require secure transfer for all of your storage accounts.
-When secure transfer is required, a call to an Azure Storage REST API operation must be made over HTTPS. Any request made over HTTP is rejected.
+When secure transfer is required, a call to an Azure Storage REST API operation must be made over HTTPS. Any request made over HTTP is rejected. By default, the **Secure transfer required** property is enabled when you create a storage account.
-Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the storage account. Examples of insecure connections include those made over SMB 2.1 or SMB 3.x without encryption.
+Azure Policy provides a built-in policy to ensure that secure transfer is required for your storage accounts. For more information, see the **Storage** section in [Azure Policy built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
-By default, the **Secure transfer required** property is enabled when you create a storage account.
+Connecting to an Azure file share over SMB without encryption fails when secure transfer is required for the storage account. Examples of insecure connections include those made over SMB 2.1 or SMB 3.x without encryption.
> [!NOTE]
> Because Azure Storage doesn't support HTTPS for custom domain names, this option is not applied when you're using a custom domain name. Also, classic storage accounts are not supported.
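A hedged sketch of enforcing secure transfer on an existing account (names are illustrative):

```azurecli
# Requires all requests to the account to use HTTPS (sketch).
az storage account update \
  --name mystorageaccount \
  --resource-group myresourcegroup \
  --https-only true
```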
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-create-workspace-cli.md
In this quickstart, you learn to create a Synapse workspace by using the Azure C
- [Azure Data Lake Storage Gen2 storage account](../storage/common/storage-account-create.md)

 > [!IMPORTANT]
- > The Azure Synapse workspace needs to be able to read and write to the selected ADLS Gen2 account. In addition, for any storage account that you link as the primary storage account, you must have enabled **hierarchical namespace** at the creation of the storage account, as described on the [Create a Storage Accout](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) page.
+ > The Azure Synapse workspace needs to be able to read and write to the selected ADLS Gen2 account. In addition, for any storage account that you link as the primary storage account, you must have enabled **hierarchical namespace** at the creation of the storage account, as described on the [Create a Storage Account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) page.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
az synapse workspace delete --name $SynapseWorkspaceName --resource-group $Synap
## Next steps
-Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
+Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.
virtual-machines Nd Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nd-series.md
Nvidia NVLink Interconnect: Not Supported<br>
||||||||||
| Standard_ND6s | 6 | 112 | 736 | 1 | 24 | 12 | 20000/200 | 4 |
| Standard_ND12s | 12 | 224 | 1474 | 2 | 48 | 24 | 40000/400 | 8 |
-| Standard_ND24s | 24 | 448 | 2948 | 4 | 24 | 32 | 80000/800 | 8 |
+| Standard_ND24s | 24 | 448 | 2948 | 4 | 96 | 32 | 80000/800 | 8 |
| Standard_ND24rs* | 24 | 448 | 2948 | 4 | 96 | 32 | 80000/800 | 8 |

1 GPU = one P40 card.
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/custom-waf-rules-overview.md
Custom rules allow you to create your own rules that are evaluated for each requ
For example, you can block all requests from an IP address in the range 192.168.5.4/24. In this rule, the operator is *IPMatch*, the matchValues is the IP address range (192.168.5.4/24), and the action is to block the traffic. You also set the rule's name and priority.
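A hedged sketch of that example with Azure CLI, assuming an existing Application Gateway WAF policy; all resource names are illustrative:

```azurecli
# Creates a custom rule that blocks requests from 192.168.5.4/24 (sketch).
az network application-gateway waf-policy custom-rule create \
  --policy-name mywafpolicy \
  --resource-group myresourcegroup \
  --name blockiprange \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

# Adds the IPMatch condition described above to the rule.
az network application-gateway waf-policy custom-rule match-condition add \
  --policy-name mywafpolicy \
  --resource-group myresourcegroup \
  --name blockiprange \
  --match-variables RemoteAddr \
  --operator IPMatch \
  --values 192.168.5.4/24
```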
-Custom rules support using compounding logic to make more advanced rules that address your security needs. For example, (Condition 1 **and** Condition 2) **or** Condition 3). This means that if Condition 1 **and** Condition 2 are met, **or** if Condition 3 is met, the WAF should take the action specified in the custom rule.
+Custom rules support using compounding logic to make more advanced rules that address your security needs. For example, ((Condition 1 **and** Condition 2) **or** Condition 3). This means that if Condition 1 **and** Condition 2 are met, **or** if Condition 3 is met, the WAF should take the action specified in the custom rule.
Different matching conditions within the same rule are always compounded using **and**. For example, block traffic from a specific IP address, and only if the client is using a certain browser.