Updates from: 05/05/2022 01:08:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 12/09/2021 Last updated : 04/30/2022
The following table summarizes the Security Assertion Markup Language (SAML) app
| - | :--: | -- |
| [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | Preview | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app). |
| [Phone factor authentication](phone-factor-technical-profile.md) | GA | |
-| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | Preview | |
+| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | GA | |
| [One-time password](one-time-password-technical-profile.md) | GA | |
| [Azure Active Directory](active-directory-technical-profile.md) as local directory | GA | |
| [Predicate validations](predicates.md) | GA | For example, password complexity. |
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Custom policy | Notes |
| - | :--: | -- |
| Azure portal | GA | |
-| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | Preview | Used for troubleshooting during development. |
-| [Application Insights event logs](analytics-with-application-insights.md) | Preview | Used to monitor user flows in production. |
+| [Application Insights user journey logs](troubleshoot-with-application-insights.md) | GA | Used for troubleshooting during development. |
+| [Application Insights event logs](analytics-with-application-insights.md) | GA | Used to monitor user flows in production. |
## Responsibilities of custom policy feature-set developers
active-directory-b2c Deploy Custom Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-devops.md
Previously updated : 03/25/2022 Last updated : 04/30/2022
active-directory-b2c Multi Factor Auth Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-auth-technical-profile.md
Previously updated : 12/09/2021 Last updated : 04/30/2022
Azure Active Directory B2C (Azure AD B2C) provides support for verifying a phone number by using a verification code, or verifying a Time-based One-time Password (TOTP) code.
-
## Protocol

The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly that is used by Azure AD B2C:
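As an illustration, the Protocol element of such a technical profile might look like the following sketch (the profile ID and display name are illustrative; the handler value follows the pattern documented for Azure AD B2C proprietary providers):

```xml
<TechnicalProfile Id="AzureMfa-VerifyOTP">
  <DisplayName>Azure AD MFA - verify TOTP</DisplayName>
  <!-- Proprietary protocol with the fully qualified handler assembly name -->
  <Protocol Name="Proprietary"
            Handler="Web.TPEngine.Providers.AzureMfaProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</TechnicalProfile>
```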
The following example shows an Azure AD MFA technical profile used to verify the
In this mode, the user is required to install any authenticator app that supports time-based one-time password (TOTP) verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app), on a device that they own.
-During the first sign-up or sign-in, the user scans a QR code, opens a deep link, or enters the code manually using the authenticator app. To verify the TOTP code, use the [Begin verify OTP](#begin-verify-totp) followed by [Verify TOTP](#verify-totp) validation technical profiles.
+During the first sign up or sign in, the user scans a QR code, opens a deep link, or enters the code manually using the authenticator app. To verify the TOTP code, use the [Begin verify OTP](#begin-verify-totp) followed by [Verify TOTP](#verify-totp) validation technical profiles.
-For subsequent sign-ins, use the [Get available devices](#get-available-devices) method to check if the user has already enrolled their device. If the number of available devices is greater than zero, this indicates the user has enrolled before. In this case, the user needs to type the TOTP code that appears in the authenticator app.
+For subsequent sign ins, use the [Get available devices](#get-available-devices) method to check if the user has already enrolled their device. If the number of available devices is greater than zero, this indicates the user has enrolled before. In this case, the user needs to type the TOTP code that appears in the authenticator app.
The technical profile:
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 02/17/2022 Last updated : 04/30/2022
In a self-asserted technical profile, you can use the **InputClaims** and **Inpu
## Display claims
-The display claims feature is currently in **preview**.
-
The **DisplayClaims** element contains a list of claims to be presented on the screen for collecting data from the user. To prepopulate the values of display claims, use the input claims that were previously described. The element may also contain a default value. The order of the claims in **DisplayClaims** specifies the order in which Azure AD B2C renders the claims on the screen. To force the user to provide a value for a specific claim, set the **Required** attribute of the **DisplayClaim** element to `true`.
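For instance, a self-asserted technical profile could declare its display claims like this (a minimal sketch; the claim names are illustrative):

```xml
<DisplayClaims>
  <!-- Rendered in this order; Required="true" forces the user to supply a value -->
  <DisplayClaim ClaimTypeReferenceId="displayName" Required="true" />
  <DisplayClaim ClaimTypeReferenceId="city" />
</DisplayClaims>
```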
Use output claims when:
- **Claims are output by output claims transformation**.
- **Setting a default value in an output claim** without collecting data from the user or returning the data from the validation technical profile. The `LocalAccountSignUpWithLogonEmail` self-asserted technical profile sets the **executed-SelfAsserted-Input** claim to `true`.
- **A validation technical profile returns the output claims** - Your technical profile may call a validation technical profile that returns some claims. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. For example, when signing in with a local account, the self-asserted technical profile named `SelfAsserted-LocalAccountSignin-Email` calls the validation technical profile named `login-NonInteractive`. This technical profile validates the user credentials and also returns the user profile, such as 'userPrincipalName', 'displayName', 'givenName', and 'surName'.
-- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey. The display control feature is currently in **preview**.
+- **A display control returns the output claims** - Your technical profile may have a reference to a [display control](display-controls.md). The display control returns some claims, such as the verified email address. You may want to bubble up the claims and return them to the next orchestration steps in the user journey.
The following example demonstrates the use of a self-asserted technical profile that uses both display claims and output claims.
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Returns a string array that contains the substrings in this instance that are de
| InputParameter | delimiter | string | The string to use as a separator, such as comma `,`. |
| OutputClaim | outputClaim | stringCollection | A string collection whose elements contain the substrings in this string that are delimited by the `delimiter` input parameter. |
+> [!NOTE]
+> Any existing elements in the `OutputClaim` stringCollection will be removed.
+
### Example of StringSplit

The following example takes a comma-delimited string of user roles and converts it to a string collection.
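A transformation of that shape could be defined as follows (a sketch; the claim names are assumptions):

```xml
<ClaimsTransformation Id="SplitRoles" TransformationMethod="StringSplit">
  <InputClaims>
    <!-- For example: "Admin,Approver,Editor" -->
    <InputClaim ClaimTypeReferenceId="roles" TransformationClaimType="inputClaim" />
  </InputClaims>
  <InputParameters>
    <InputParameter Id="delimiter" DataType="string" Value="," />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="rolesCollection" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```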
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md
Previously updated : 11/30/2021 Last updated : 04/30/2022
The **TechnicalProfile** element contains the following elements:
| InputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed before any claims are sent to the claims provider or the relying party. |
| InputClaims | 0:1 | A list of previously defined references to claim types that are taken as input in the technical profile. |
| PersistedClaims | 0:1 | A list of previously defined references to claim types that will be persisted by the technical profile. |
-| DisplayClaims | 0:1 | A list of previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). The DisplayClaims feature is currently in preview. |
+| DisplayClaims | 0:1 | A list of previously defined references to claim types that are presented by the [self-asserted technical profile](self-asserted-technical-profile.md). |
| OutputClaims | 0:1 | A list of previously defined references to claim types that are taken as output in the technical profile. |
| OutputClaimsTransformations | 0:1 | A list of previously defined references to claims transformations that should be executed after the claims are received from the claims provider. |
| ValidationTechnicalProfiles | 0:n | A list of references to other technical profiles that the technical profile uses for validation purposes. For more information, see [Validation technical profile](validation-technical-profile.md).|
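Taken together, a technical profile using several of these elements has roughly this shape (a skeletal sketch; the IDs and claim names are illustrative):

```xml
<TechnicalProfile Id="Example-TechnicalProfile">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" />
  </InputClaims>
  <PersistedClaims>
    <PersistedClaim ClaimTypeReferenceId="email" />
  </PersistedClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="objectId" />
  </OutputClaims>
  <ValidationTechnicalProfiles>
    <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
  </ValidationTechnicalProfiles>
</TechnicalProfile>
```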
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 04/04/2022 Last updated : 05/04/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## April 2022
+
+### New articles
+
+- [Configure Asignio with Azure Active Directory B2C for multifactor authentication](partner-asignio.md)
+- [Set up sign up and sign in with Mobile ID using Azure Active Directory B2C](identity-provider-mobile-id.md)
+- [Find help and open a support ticket for Azure Active Directory B2C](find-help-open-support-ticket.md)
+
+### Updated articles
+
+- [Configure authentication in a sample single-page application by using Azure AD B2C](configure-authentication-sample-spa-app.md)
+- [Configure xID with Azure Active Directory B2C for passwordless authentication](partner-xid.md)
+- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
+- [Localization string IDs](localization-string-ids.md)
+- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
+- [Page layout versions](page-layout.md)
+- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md)
+- [Azure Active Directory B2C: What's new](whats-new-docs.md)
+- [Application types that can be used in Active Directory B2C](application-types.md)
+- [Publish your Azure Active Directory B2C app to the Azure Active Directory app gallery](publish-app-to-azure-ad-app-gallery.md)
+- [Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C](quickstart-native-app-desktop.md)
+- [Register a single-page application (SPA) in Azure Active Directory B2C](tutorial-register-spa.md)
+
## March 2022

### New articles
active-directory Application Proxy Integrate With Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-traffic-manager.md
+
+ Title: Add your own Traffic Manager to Application Proxy
+description: Learn how to combine the Application Proxy service with a Traffic Manager solution.
+Last updated : 05/02/2022
+# Add your own Traffic Manager to Application Proxy
+
+This article explains how to configure Azure Active Directory (Azure AD) Application Proxy to work with Traffic Manager. With the Application Proxy geo-routing feature, you can optimize which region of the Application Proxy service your connector groups use. You can now combine this functionality with a Traffic Manager solution of your choice. This combination enables a fully dynamic geo-aware solution based on your user location. It unlocks the rich rule set of your preferred Traffic Manager to prioritize how traffic is routed to your apps protected by Application Proxy. With this combination, users can use a single URL to access the instance of the app closest to them.
++
+## Prerequisites
+
+- A Traffic Manager solution.
+- Apps that exist in different regions. Geo-routing is enabled per connector group co-located with the app.
+- A custom domain to use for each app.
+
+## Application Proxy configuration
+
+To use Traffic Manager, you must configure Application Proxy. The configuration steps that follow refer to the following URL definitions:
+
+- Regional URL: The Application Proxy endpoints for each app. For example, nam.contoso.com and india.contoso.com.
+- Alternate URL: The URL configured for the Traffic Manager. For example, contoso.com.
+
+Follow these steps to configure Application Proxy for Traffic Manager:
+
+1. Install connectors for each location your app instances will be in. For each connector group, use the geo-routing feature to assign the connectors to their respective regions.
+
+1. Set up your app instances with Application Proxy as follows:
+   1. For each app, upload a custom domain. Include the alternate URL for the apps as a subject alternative name (SAN) in the uploaded certificate.
+ 1. Assign each app to its respective connector group.
+ 1. If you prefer the alternate URL to be maintained throughout the user session, register each app and add the URL as a reply URL. This step is optional.
+
+1. In the Traffic Manager solution, add the Application Proxy regional URLs that were created for each app as an endpoint.
+
+1. Configure the Traffic Manager's load balancing rules with a standard SKU.
+
+1. To give your Traffic Manager a user-friendly URL, create a CNAME record that points the alternate URL to the Traffic Manager's endpoint.
+
+1. With the `alternateUrl` property, configure the alternate URL on the [onPremisesPublishing resource type](/graph/api/resources/onpremisespublishing) of the app.
+
+1. If you want the alternate URL to be maintained throughout the user session, call `onPremisesPublishing` and set the `useAlternateUrlForTranslationAndRedirect` flag to `true`.
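A sketch of that Graph call (Microsoft Graph beta endpoint; the application object ID and URL values are placeholders):

```http
PATCH https://graph.microsoft.com/beta/applications/{application-object-id}/onPremisesPublishing
Content-Type: application/json

{
  "alternateUrl": "https://www.contoso.com",
  "useAlternateUrlForTranslationAndRedirect": true
}
```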
+
+## Sample Application Proxy configuration
+
+The following table shows a sample Application Proxy configuration. This sample uses the app domain www\.contoso.com as the alternate URL.
+
+| | North America-based app | India-based app | Additional Information |
+| - | -- | -- | - |
+| **Internal URL** | contoso.com | contoso.com | If the apps are hosted in different regions, you can use the same internal URL for each app. |
+| **External URL** | nam.contoso.com | india.contoso.com | Configure a custom domain for each app.|
+| **Custom domain certificate** | DNS: nam.contoso.com<br>SAN: www\.contoso.com | DNS: india.contoso.com<br>SAN: www\.contoso.com | In the certificate you upload for each app, set the SAN value to the alternate URL. The alternate URL is the URL all users use to reach the app.|
+| **Connector group** | NAM Geo Group | India Geo Group | Ensure you assign each app to the correct connector group by using the geo-routing functionality. |
+| **Redirects** | (Optional) To maintain redirects for the alternate URL, add the application registration for the app. | (Optional) To maintain redirects for the alternate URL, add the application registration for the app. | This step is required if the alternate URL (www\.contoso.com) is to be maintained for all redirections. |
+| **Reply URL** | www\.contoso.com | www\.contoso.com | |
+
+## Traffic Manager configuration
+
+Follow these steps to configure the Traffic Manager:
+
+1. Create a Traffic Manager profile with your preferred routing rules.
+
+1. In the Traffic Manager, add the NAM endpoint: nam.contoso.com.
+
+1. Add the India endpoint: india.contoso.com.
+
+1. Add the app proxy endpoints.
+
+1. Add a CNAME record to point www\.contoso.com to the Traffic Manager's URL. For example, contoso.trafficmanager.net.
+
+ The alternate URL now points to the Traffic Manager.
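In zone-file form, such a record might look like this (the TTL value is illustrative):

```
; Point the alternate URL at the Traffic Manager endpoint
www.contoso.com.  3600  IN  CNAME  contoso.trafficmanager.net.
```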
+
+## Next steps
+
+[Publish applications on separate networks and locations using connector groups](application-proxy-connector-groups.md)
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
# What is Azure AD Connect cloud sync?
-Azure AD Connect cloud sync is new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits:
+Azure AD Connect cloud sync is a new offering from Microsoft designed to meet and accomplish your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It accomplishes this by using the Azure AD cloud provisioning agent instead of the Azure AD Connect application. However, it can be used alongside Azure AD Connect sync and it provides the following benefits:
- Support for synchronizing to an Azure AD tenant from a multi-forest disconnected Active Directory forest environment: The common scenarios include merger & acquisition (where the acquired company's AD forests are isolated from the parent company's AD forests), and companies that have historically had multiple AD forests.
- Simplified installation with light-weight provisioning agents: The agents act as a bridge from AD to Azure AD, with all the sync configuration managed in the cloud.
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md
The PRT is issued during user authentication on a Windows 10 or newer device in
In Azure AD registered device scenarios, the Azure AD WAM plugin is the primary authority for the PRT since Windows logon is not happening with this Azure AD account.

> [!NOTE]
-> 3rd party identity providers need to support the WS-Trust protocol to enable PRT issuance on Windows 10 or newer devices. Without WS-Trust, PRT cannot be issued to users on Hybrid Azure AD joined or Azure AD joined devices. On ADFS only usernamemixed endpoints are required. Both adfs/services/trust/2005/windowstransport and adfs/services/trust/13/windowstransport should be enabled as intranet facing endpoints only and **must NOT be exposed** as extranet facing endpoints through the Web Application Proxy
+> 3rd party identity providers need to support the WS-Trust protocol to enable PRT issuance on Windows 10 or newer devices. Without WS-Trust, PRT cannot be issued to users on Hybrid Azure AD joined or Azure AD joined devices. On ADFS only usernamemixed endpoints are required. Both adfs/services/trust/2005/windowstransport and adfs/services/trust/13/windowstransport should be enabled as intranet facing endpoints only and **must NOT be exposed** as extranet facing endpoints through the Web Application Proxy.
> [!NOTE]
-> Azure AD Conditional Access policies are not evaluated when PRTs are issued
+> Azure AD Conditional Access policies are not evaluated when PRTs are issued.
+
+> [!NOTE]
+> We do not support 3rd party credential providers for issuance and renewal of Azure AD PRTs.
## What is the lifetime of a PRT?
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
The following are the user properties that you can use to create a single expres
| mailNickName |Any string value (mail alias of the user) |(user.mailNickName -eq "value") |
| mobile |Any string value or *null* |(user.mobile -eq "value") |
| objectId |GUID of the user object |(user.objectId -eq "11111111-1111-1111-1111-111111111111") |
+| onPremisesDistinguishedName (preview)| Any string value or *null* |(user.onPremisesDistinguishedName -eq "value") |
| onPremisesSecurityIdentifier | On-premises security identifier (SID) for users who were synchronized from on-premises to the cloud. |(user.onPremisesSecurityIdentifier -eq "S-1-1-11-1111111111-1111111111-1111111111-1111111") |
| passwordPolicies |None<br>DisableStrongPassword<br>DisablePasswordExpiration<br>DisablePasswordExpiration, DisableStrongPassword |(user.passwordPolicies -eq "DisableStrongPassword") |
| physicalDeliveryOfficeName |Any string value or *null* |(user.physicalDeliveryOfficeName -eq "value") |
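For example, a membership rule using the new onPremisesDistinguishedName property might look like this (the distinguished name is illustrative):

```
(user.onPremisesDistinguishedName -eq "CN=Jane Doe,OU=Sales,DC=contoso,DC=com")
```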
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Azure AD Connect was released several years ago. Since this time, several of th
-To address this, we have bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
+To address this, we've bundled as many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2. This release is a new version of the same software used to accomplish your hybrid identity goals, built using the latest foundational components.
## What are the major changes?
The previous versions of Azure AD Connect shipped with the ADAL authentication l
### Visual C++ Redist 14
-SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This redistributable will be installed with the Azure AD Connect V2 package, so you do not have to take any action for the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we're updating the C++ runtime library to use this version. This Redistributable will be installed with the Azure AD Connect V2 package, so you don't have to take any action for the C++ runtime update.
### TLS 1.2

TLS 1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect will only support TLS 1.2.
-All versions of Windows Server that are supported for Azure AD Connect V2 already default to TLS 1.2. If your server does not support TLS 1.2 you will need to enable this before you can deploy Azure AD Connect V2. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
+All versions of Windows Server that are supported for Azure AD Connect V2 already default to TLS 1.2. If your server doesn't support TLS 1.2 you will need to enable this before you can deploy Azure AD Connect V2. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
### All binaries signed with SHA2
-We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we have changed the signing of Windows updates to use the more secure SHA-2 algorithm.
+We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we've changed the signing of Windows updates to use the more secure SHA-2 algorithm.
There is no action needed from your side.

### Windows Server 2012 and Windows Server 2012 R2 are no longer supported
-SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since AAD Connect v2 contains SQL Server 2019 components, we no longer can support older Windows Server versions.
+SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since Azure AD Connect v2 contains SQL Server 2019 components, we no longer can support older Windows Server versions.
-You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
+You can't install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
This [article](/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
This release of Azure AD Connect contains several cmdlets that require PowerShel
More details about PowerShell prerequisites can be found [here](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).

>[!NOTE]
- >PowerShell 5 is already part of Windows Server 2016 so you probably do not have to take action as long as you are on a recent Window Server version.
 >PowerShell 5 is already part of Windows Server 2016 so you probably don't have to take action as long as you're on a recent Windows Server version.
## What else do I need to know?
More details about PowerShell prerequisites can be found [here](/powershell/scri
**Why is this upgrade important for me?** </br> Next year, several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend that all customers upgrade to this newer version as soon as they can.
-This upgrade is especially important since we have had to update our prerequisites for Azure AD Connect and you may need additional time to plan and update your servers to the newer versions of these prerequisites
+This upgrade is especially important since we've had to update our prerequisites for Azure AD Connect and you may need additional time to plan and update your servers to the newer versions of these prerequisites.
**Is there any new functionality I need to know about?** </br>
-No – the V2.0 release does not contain any new functionality. This release only contains updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 may contain new functionality.
+No – the V2.0 release doesn't contain any new functionality. This release only contains updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 may contain new functionality.
**Can I upgrade from any previous version to V2?** </br> Yes – upgrades from any previous version of Azure AD Connect to Azure AD Connect V2 are supported. Please follow the guidance in [this article](how-to-upgrade-previous-version.md) to determine the best upgrade strategy for you.
Yes – upgrades from any previous version of Azure AD Connect to Azure AD Conne
Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2 – especially if you are also upgrading to a new operating system version. You can read more about the Import/export configuration feature and how you can use it in this [article](how-to-connect-import-export-config.md).

**I have enabled auto upgrade for Azure AD Connect – will I get this new version automatically?** </br>
-Yes - your Azure AD Connect server will be upgraded to the latest release if you have enabled the auto-upgrade feature. Note that we have no yet release an auto upgrade version for Azure AD Connect.
+Yes - your Azure AD Connect server will be upgraded to the latest release if you have enabled the auto-upgrade feature. Note that we've not yet released an auto-upgrade version for Azure AD Connect.
**I am not ready to upgrade yet – how much time do I have?** </br> You should upgrade to Azure AD Connect V2 as soon as you can. **__All Azure AD Connect V1 versions will be retired on 31 August, 2022.__** For the time being we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS1.0/1.1 as these services might stop working unexpectedly after they are deprecated.
-**I use an external SQL database and do not use SQL 2012 LocalDb – do I still have to upgrade?** </br>
-Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2. The SQL 2019 drivers in Azure AD Connect V2 are compatible with SQL Server 2012.
+**I use an external SQL database and don't use SQL 2012 LocalDb – do I still have to upgrade?** </br>
+Yes, you still need to upgrade to remain in a supported state even if you don't use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2. The SQL 2019 drivers in Azure AD Connect V2 are compatible with SQL Server 2012.
**After the upgrade of my Azure AD Connect instance to V2, will the SQL 2012 components automatically get uninstalled?** </br>
-No, the upgrade to SQL 2019 does not remove any SQL 2012 components from your server. If you no longer need these components then you should follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
+No, the upgrade to SQL 2019 doesn't remove any SQL 2012 components from your server. If you no longer need these components then you should follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
-**What happens if I do not upgrade?** </br>
+**What happens if I don't upgrade?** </br>
Until one of the components that are being retired is actually deprecated, you will not see any impact. Azure AD Connect will keep on working.
-We expect TLS 1.0/1.1 to be deprecated in 2022, and you need to make sure you are not using these protocols by that date as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2 though, and that does not require an update of Azure AD Connect to V2
+We expect TLS 1.0/1.1 to be deprecated in 2022, and you need to make sure you aren't using these protocols by that date as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2 though, and that doesn't require an update of Azure AD Connect to V2.
-In June 2022, ADAL is planned to go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
+After December 2022, ADAL is planned to go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before December 2022. You can't upgrade to a supported authentication library with your current Azure AD Connect version.
-**After upgrading to 2 the ADSync PowerShell cmdlets do not work?** </br>
+**After upgrading to V2, the ADSync PowerShell cmdlets don't work?** </br>
This is a known issue. Restart your PowerShell session after installing or upgrading to version 2 and then reimport the module. Use the following instructions to import the module.

1. Open Windows PowerShell with administrative privileges.
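The import step typically looks like this (a sketch assuming the default Azure AD Connect installation path; adjust the path if you installed elsewhere):

```powershell
# Re-import the ADSync module from the default installation folder
Import-Module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync" -Verbose
```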
This is a known issue. Restart your PowerShell session after installing or upgra
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
- [Express settings](how-to-connect-install-express.md)
-- [Customized settings](how-to-connect-install-custom.md)
+- [Customized settings](how-to-connect-install-custom.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Previously updated : 04/27/2022 Last updated : 05/04/2022
This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
+## May
+
+We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a [small change](verifiable-credentials-faq.md#updating-the-vc-service-configuration) to avoid service disruptions.
+
## April

Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes.
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
NODEPOOL_ID=$(az aks nodepool show --name nodepool1 --cluster-name myAKSCluster
> [!IMPORTANT]
> Your AKS node pool must be created or upgraded after Nov 10th, 2021 in order for a snapshot to be taken from it.
->Starting April, 2022 the CLI-preview extension commands for node pool snapshot has changed. In preview CLI please use az aks nodepool snapshot commands, refer [CLI Node Pool Snapshot][az-aks-nodepool-snapshot].
+> If you are using the `aks-preview` Azure CLI extension version `0.5.59` or newer, the commands for node pool snapshot have changed. For updated commands, see the [Node Pool Snapshot CLI reference][az-aks-nodepool-snapshot].
Now, to take a snapshot from the previous node pool you'll use the `az aks snapshot` CLI command.
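A sketch of that command (the resource names are illustrative; on aks-preview 0.5.59 or newer, use the `az aks nodepool snapshot` group instead, as noted above):

```azurecli
# Create a snapshot from an existing node pool, referenced by its resource ID
az aks snapshot create \
    --resource-group myResourceGroup \
    --name MySnapshot \
    --nodepool-id $NODEPOOL_ID \
    --location eastus
```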
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
The following limitations apply when you integrate KMS etcd encryption with AKS:
* KMS etcd encryption does not work with System-Assigned Managed Identity. The keyvault access-policy is required to be set before the feature is enabled. In addition, System-Assigned Managed Identity is not available until cluster creation, thus there is a cycle dependency.
* Using Azure Key Vault with PrivateLink enabled.
* Using more than 2000 secrets in a cluster.
-* Managed HSM Support
* Bring your own (BYO) Azure Key Vault from another tenant.
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Title: Authorize developer accounts using OAuth 2.0 in API Management
+ Title: Authorize test console of API Management developer portal using OAuth 2.0 user authorization
-description: Learn how to authorize users using OAuth 2.0 in API Management. OAuth 2.0 secures the API so that users can only access resources to which they're entitled.
+description: Learn how to set up OAuth 2.0 user authorization for the interactive test console in the Azure API Management developer portal. This article shows an example using Azure Active Directory as an OAuth 2.0 provider.
documentationcenter: '' Previously updated : 11/16/2021 Last updated : 04/26/2022
-# How to authorize developer accounts using OAuth 2.0 in Azure API Management
+# How to authorize test console of developer portal by configuring OAuth 2.0 user authorization
-Many APIs support [OAuth 2.0](https://oauth.net/2/) to secure the API and ensure that only valid users have access, and they can only access resources to which they're entitled. To use Azure API Management's interactive developer console with such APIs, the service allows you to configure your service instance to work with your OAuth 2.0 enabled API.
-
-Configuring OAuth 2.0 user authorization in the test console of the developer portal provides developers with a convenient way to acquire an OAuth 2.0 access token. From the test console, the token is simply passed to the backend with the API call. Token validation must be configured separately - either using a [JWT validation policy](api-management-access-restriction-policies.md#ValidateJWT), or in the backend service.
+Many APIs support [OAuth 2.0](https://oauth.net/2/) to secure the API and ensure that only valid users have access, and they can only access resources to which they're entitled. To use Azure API Management's interactive developer console with such APIs, the service allows you to configure an external provider for OAuth 2.0 user authorization.
+Configuring OAuth 2.0 user authorization in the test console of the developer portal provides developers with a convenient way to acquire an OAuth 2.0 access token. From the test console, the token is then passed to the backend with the API call. Token validation must be configured separately - either using a [JWT validation policy](api-management-access-restriction-policies.md#ValidateJWT), or in the backend service.
## Prerequisites
-This guide shows you how to configure your API Management service instance to use OAuth 2.0 authorization for developer accounts, but does not show you how to configure an OAuth 2.0 provider.
+This article shows you how to configure your API Management service instance to use OAuth 2.0 authorization in the developer portal's test console, but doesn't show you how to configure an OAuth 2.0 provider.
-The configuration for each OAuth 2.0 provider is different, although the steps are similar, and the required pieces of information used to configure OAuth 2.0 in your API Management service instance are the same. This topic shows examples using Azure Active Directory as an OAuth 2.0 provider.
+If you haven't yet created an API Management service instance, see [Create an API Management service instance][Create an API Management service instance].
-If you have not yet created an API Management service instance, see [Create an API Management service instance][Create an API Management service instance].
-> [!NOTE]
-> For more information on configuring OAuth 2.0 using Azure Active Directory, see [Protect a web API backend in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+## Scenario overview
+Configuring OAuth 2.0 user authorization in API Management only enables the developer portal's test console as a client to acquire a token from the authorization server. The configuration for each OAuth 2.0 provider is different, although the steps are similar, and the required pieces of information used to configure OAuth 2.0 in your API Management service instance are the same. This article shows an example using Azure Active Directory as an OAuth 2.0 provider.
++
+1. Register an application (backend-app) in Azure AD to represent the API.
+
+1. Register another application (client-app) in Azure AD to represent a client application that needs to call the API - in this case, the test console of the developer portal.
+
+ In Azure AD, grant permissions to allow the client-app to call the backend-app.
+
+1. Configure the test console in the developer portal to call an API using OAuth 2.0 user authorization.
+
+1. Configure an API to use OAuth 2.0 user authorization.
+
+1. Add the **validate-jwt** policy to pre-authorize the OAuth 2.0 token for every incoming request.
## Authorization grant types
The following is a high level summary. For more information about grant types, s
|Grant type |Description |Scenarios |
||||
|Authorization code | Exchanges authorization code for token | Server-side apps such as web apps |
-|Implicit | Returns access token immediately without an extra authorization code exchange step | Clients that can't protect a secret or token such as mobile apps and single-page apps<br/><br/>Generally not recommended because of inherent risks of returning access token in HTTP redirect without confirmation that it is received by client |
+|Implicit | Returns access token immediately without an extra authorization code exchange step | Clients that can't protect a secret or token such as mobile apps and single-page apps<br/><br/>Generally not recommended because of inherent risks of returning access token in HTTP redirect without confirmation that it's received by client |
|Resource owner password | Requests user credentials (username and password), typically using an interactive form | For use with highly trusted applications<br/><br/>Should only be used when other, more secure flows can't be used |
-|Client credentials | Authenticates and authorizes an app rather than a user | Machine-to-machine applications that do not require a specific user's permissions to access data, such as CLIs, daemons, or services running on your backend |
+|Client credentials | Authenticates and authorizes an app rather than a user | Machine-to-machine applications that don't require a specific user's permissions to access data, such as CLIs, daemons, or services running on your backend |
### Security considerations
When configuring OAuth 2.0 user authorization in the test console of the develop
Depending on your scenarios, you may configure more or less restrictive token scopes for other client applications that you create to access backend APIs.

* **Take extra care if you enable the Client Credentials flow**. The test console in the developer portal, when working with the Client Credentials flow, doesn't ask for credentials. An access token could be inadvertently exposed to developers or anonymous users of the developer console.
+## Register applications with the OAuth server
+
+You'll need to register two applications with your OAuth 2.0 provider: one represents the backend API to be protected, and a second represents the client application that calls the API - in this case, the test console of the developer portal.
+
+The following are example steps using Azure AD as the OAuth 2.0 provider.
+
+### Register an application in Azure AD to represent the API
+
+Using the Azure portal, register an application that represents the backend API in Azure AD.
+
+For details about app registration, see [Quickstart: Configure an application to expose a web API](../active-directory/develop/quickstart-configure-app-expose-web-apis.md).
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
+
+1. Select **New registration**.
+
+1. When the **Register an application page** appears, enter your application's registration information:
+
+ - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, such as *backend-app*.
+ - In the **Supported account types** section, select an option that suits your scenario.
+
+1. Leave the [**Redirect URI**](../active-directory/develop/reply-url.md) section empty. Later, you'll add a redirect URI generated in the OAuth 2.0 configuration in API Management.
+
+1. Select **Register** to create the application.
+
+1. On the app **Overview** page, find the **Application (client) ID** value and record it for later.
+
+1. Under the **Manage** section of the side menu, select **Expose an API** and set the **Application ID URI** with the default value. Record this value for later.
+
+1. Select the **Add a scope** button to display the **Add a scope** page:
+ 1. Enter a new **Scope name**, **Admin consent display name**, and **Admin consent description**.
+ 1. Make sure the **Enabled** scope state is selected.
+
+1. Select the **Add scope** button to create the scope.
+
+1. Repeat the previous two steps to add all scopes supported by your API.
+
+1. Once the scopes are created, make a note of them for use in a subsequent step.
+
+### Register another application in Azure AD to represent a client application
+
+Register every client application that calls the API as an application in Azure AD. In this example, the client application is the **test console** in the API Management developer portal.
+
+To register an application in Azure AD to represent the client application:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
+
+1. Select **New registration**.
+
+1. When the **Register an application page** appears, enter your application's registration information:
+
+ - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, such as *client-app*.
+ - In the **Supported account types** section, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
+
+1. In the **Redirect URI** section, select `Web` and leave the URL field empty for now.
+
+1. Select **Register** to create the application.
+
+1. On the app **Overview** page, find the **Application (client) ID** value and record it for later.
+
+1. Create a client secret for this application to use in a subsequent step.
+
+ 1. Under the **Manage** section of the side menu, select **Certificates & secrets**.
+ 1. Under **Client secrets**, select **New client secret**.
+ 1. Under **Add a client secret**, provide a **Description** and choose when the key should expire.
+ 1. Select **Add**.
+
+When the secret is created, note the key value for use in a subsequent step. You can't access the secret again in the portal.
+
+### Grant permissions in Azure AD
+
+Now that you've registered two applications to represent the API and the test console, grant permissions to allow the client-app to call the backend-app.
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
+
+1. Choose your client app. Then in the side menu, select **API permissions**.
+
+1. Select **+ Add a Permission**.
+
+1. Under **Select an API**, select **My APIs**, and then find and select your backend-app.
+
+1. Select **Delegated Permissions**, then select the appropriate permissions for your backend-app.
+
+1. Select **Add permissions**.
+
+Optionally:
+1. Navigate to your client-app's **API permissions** page.
+
+1. Select **Grant admin consent for \<your-tenant-name>** to grant consent on behalf of all users in this directory.
+
## Configure an OAuth 2.0 authorization server in API Management

1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
1. Under the Developer portal section in the side menu, select **OAuth 2.0 + OpenID Connect**.
-1. Under the **OAuth 2.0 tab**, select **+Add**.
+1. Under the **OAuth 2.0 tab**, select **+ Add**.
:::image type="content" source="media/api-management-howto-oauth2/oauth-01.png" alt-text="OAuth 2.0 menu":::
When configuring OAuth 2.0 user authorization in the test console of the develop
1. Enter the **Client registration page URL** - for example, `https://contoso.com/login`. This page is where users can create and manage their accounts, if your OAuth 2.0 provider supports user management of accounts. The page varies depending on the OAuth 2.0 provider used.
- If your OAuth 2.0 provider does not have user management of accounts configured, enter a placeholder URL here such as the URL of your company, or a URL such as `https://placeholder.contoso.com`.
+ If your OAuth 2.0 provider doesn't have user management of accounts configured, enter a placeholder URL here such as the URL of your company, or a URL such as `http://localhost`.
:::image type="content" source="media/api-management-howto-oauth2/oauth-02.png" alt-text="OAuth 2.0 new server":::

1. The next section of the form contains the **Authorization grant types**, **Authorization endpoint URL**, and **Authorization request method** settings.
- * Specify the **Authorization grant types** by checking the desired types. **Authorization code** is specified by default. [Learn more](#authorization-grant-types).
+ * Select one or more desired **Authorization grant types**. For this example, select **Authorization code** (the default). [Learn more](#authorization-grant-types).
+
+ * Enter the **Authorization endpoint URL**. For Azure AD, this URL will be similar to one of the following URLs, where `<tenant_id>` is replaced with the ID of your Azure AD tenant. You can obtain the endpoint URL from the **Endpoints** page of one of your app registrations.
- * Enter the **Authorization endpoint URL**. For Azure Active Directory, this URL will be similar to the following URL, where `<tenant_id>` is replaced with the ID of your Azure AD tenant.
+ Using the v2 endpoint is recommended; however, API Management supports both v1 and v2 endpoints.
- `https://login.microsoftonline.com/<tenant_id>/oauth2/authorize`
+ `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/authorize` (v2)
- * The **Authorization request method** specifies how the authorization request is sent to the OAuth 2.0 server. By default **GET** is selected.
+ `https://login.microsoftonline.com/<tenant_id>/oauth2/authorize` (v1)
+
+ * The **Authorization request method** specifies how the authorization request is sent to the OAuth 2.0 server. Select **POST**.
:::image type="content" source="media/api-management-howto-oauth2/oauth-03.png" alt-text="Specify authorization settings":::

1. Specify **Token endpoint URL**, **Client authentication methods**, **Access token sending method**, and **Default scope**.
- * For an Azure Active Directory OAuth 2.0 server, the **Token endpoint URL** has the following format, where `<TenantID>` has the format of `yourapp.onmicrosoft.com`.
+ * Enter the **Token endpoint URL**. For Azure AD, it will be similar to one of the following URLs, where `<tenant_id>` is replaced with the ID of your Azure AD tenant. Use the same endpoint version (v2 or v1) that you chose previously.
+
+ `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token` (v2)
- `https://login.microsoftonline.com/<TenantID>/oauth2/token`
+ `https://login.microsoftonline.com/<tenant_id>/oauth2/token` (v1)
- * The default setting for **Client authentication methods** is **In the body**, and **Access token sending method** is **Authorization header**. These values are configured on this section of the form, along with the **Default scope**.
+ * If you use **v1** endpoints, add a body parameter:
+ * Name: **resource**.
+ * Value: the back-end app **Application (client) ID**.
+ * If you use **v2** endpoints:
+ * Enter the back-end app scope you created in the **Default scope** field.
+       * Set the value for the [`accessTokenAcceptedVersion`](../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute) property to `2` in the [application manifest](../active-directory/develop/reference-app-manifest.md) for both the backend-app and the client-app registrations (a sketch follows this list).
-6. The **Client credentials** section contains the **Client ID** and **Client secret**, which are obtained during the creation and configuration process of your OAuth 2.0 server.
+ * Accept the default settings for **Client authentication methods** and **Access token sending method**.
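A minimal sketch of that manifest change (only the relevant property is shown; the rest of the manifest is omitted):

```json
{
  "accessTokenAcceptedVersion": 2
}
```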
+
+1. The **Client credentials** section contains the **Client ID** and **Client secret**, which you obtained during the creation and configuration process of your client-app.
- After the **Client ID** and **Client secret** are specified, the **redirect_uri** for the **authorization code** is generated. This URI is used to configure the reply URL in your OAuth 2.0 server configuration.
+1. After the **Client ID** and **Client secret** are specified, the **Redirect URI** for the **authorization code** is generated. This URI is used to configure the redirect URI in your OAuth 2.0 server configuration.
In the developer portal, the URI suffix is of the form:

- `/signin-oauth/code/callback/{authServerName}` for authorization code grant flow
- `/signin-oauth/implicit/callback` for implicit grant flow
+
+ Copy the appropriate Redirect URI to the **Authentication** page of your client-app registration.
:::image type="content" source="media/api-management-howto-oauth2/oauth-04.png" alt-text="Add client credentials for the OAuth 2.0 service":::
When configuring OAuth 2.0 user authorization in the test console of the develop
1. Select **Create** to save the API Management OAuth 2.0 authorization server configuration.
-After the server configuration is saved, you can configure APIs to use this configuration, as shown in the next section.
+1. [Republish](api-management-howto-developer-portal-customize.md#publish) the developer portal.
+
+After saving the OAuth 2.0 server configuration, configure APIs to use this configuration, as shown in the next section.
## Configure an API to use OAuth 2.0 user authorization
After the server configuration is saved, you can configure APIs to use this conf
[!INCLUDE [api-management-portal-legacy.md](../../includes/api-management-portal-legacy.md)]
-Once you have configured your OAuth 2.0 authorization server and configured your API to use that server, you can test it by going to the Developer Portal and calling an API. Click **Developer portal (legacy)** in the top menu from your Azure API Management instance **Overview** page.
+Once you've configured your OAuth 2.0 authorization server and configured your API to use that server, you can test it by going to the developer portal and calling an API. Click **Developer portal (legacy)** in the top menu from your Azure API Management instance **Overview** page.
Click **APIs** in the top menu and select **Echo API**.
Select the **GET Resource** operation, click **Open Console**, and then select *
![Open console][api-management-open-console]
-When **Authorization code** is selected, a pop-up window is displayed with the sign-in form of the OAuth 2.0 provider. In this example the sign-in form is provided by Azure Active Directory.
+When **Authorization code** is selected, a pop-up window is displayed with the sign-in form of the OAuth 2.0 provider. In this example, the sign-in form is provided by Azure Active Directory.
> [!NOTE]
> If you have pop-ups disabled, you'll be prompted to enable them by the browser. After you enable them, select **Authorization code** again and the sign-in form will be displayed.

![Sign in][api-management-oauth2-signin]
-Once you have signed in, the **Request headers** are populated with an `Authorization : Bearer` header that authorizes the request.
+Once you've signed in, the **Request headers** are populated with an `Authorization : Bearer` header that authorizes the request.
![Request header token][api-management-request-header-token]

At this point you can configure the desired values for the remaining parameters, and submit the request.
+## Configure a JWT validation policy to pre-authorize requests
+
+In the preceding section, API Management doesn't validate the access token. It only passes the token in the authorization header to the backend API.
+
+To pre-authorize requests, configure a [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy to validate the access token of each incoming request. If a request doesn't have a valid token, API Management blocks it.
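A sketch of such a policy for tokens issued by the Azure AD v2 endpoint (the tenant ID and audience values are placeholders):

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <!-- Application (client) ID of the backend-app registration -->
        <audience>{backend-app-client-id}</audience>
    </audiences>
</validate-jwt>
```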
+
## Next steps
-For more information about using OAuth 2.0 and API Management, see the following video and accompanying [article](api-management-howto-protect-backend-with-aad.md).
+For more information about using OAuth 2.0 and API Management, see [Protect a web API backend in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+
[api-management-oauth2-signin]: ./media/api-management-howto-oauth2/api-management-oauth2-signin.png
[api-management-request-header-token]: ./media/api-management-howto-oauth2/api-management-request-header-token.png
For more information about using OAuth 2.0 and API Management, see the following
[Configure an OAuth 2.0 authorization server in API Management]: #step1
[Configure an API to use OAuth 2.0 user authorization]: #step2
[Test the OAuth 2.0 user authorization in the Developer Portal]: #step3
-[Next steps]: #next-steps
+[Next steps]: #next-steps
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
Title: Protect API backend in API Management using OAuth 2.0 and Azure Active Directory
+ Title: Protect API in API Management using OAuth 2.0 and Azure Active Directory
-description: Learn how to secure user access to a web API backend in Azure API Management and the developer portal with OAuth 2.0 user authorization and Azure Active Directory.
+description: Learn how to secure user access to an API in Azure API Management with OAuth 2.0 user authorization and Azure Active Directory.
Previously updated : 09/17/2021 Last updated : 04/27/2022
-# Protect a web API backend in Azure API Management using OAuth 2.0 authorization with Azure Active Directory
+# Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory
-In this article, you'll learn how to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
-
-You can configure authorization for developer accounts using other OAuth 2.0 providers. For more information, see [How to authorize developer accounts using OAuth 2.0 in Azure API Management](api-management-howto-oauth2.md).
-
-> [!NOTE]
-> This feature is available in the **Developer**, **Basic**, **Standard**, and **Premium** tiers of API Management.
->
-> You can follow every step below in the **Consumption** tier, except for calling the API from the developer portal.
+In this article, you'll learn the high-level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
## Prerequisites
Prior to following the steps in this article, you must have:
## Overview
+Follow these steps to protect an API in API Management, using OAuth 2.0 authorization with Azure AD.
-1. Register an application (backend-app) in Azure AD to represent the API.
+1. Register an application (called *backend-app* in this article) in Azure AD to protect access to the API.
-1. Register another application (client-app) in Azure AD to represent a client application that needs to call the API.
+ To access the API, users or applications will acquire and present a valid OAuth token granting access to this app with each API request.
-1. In Azure AD, grant permissions to allow the client-app to call the backend-app.
+1. Configure the [validate-jwt](api-management-access-restriction-policies.md#ValidateJWT) policy in API Management to validate the OAuth token presented in each incoming API request. Valid requests can be passed to the API.
-1. Configure the developer console in the developer portal to call the API using OAuth 2.0 user authorization.
+Details about OAuth authorization flows and how to generate the required OAuth tokens are beyond the scope of this article. Typically, a separate client app is used to acquire tokens from Azure AD that authorize access to the API. For links to more information, see the [Next steps](#next-steps).
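
For a quick test, one option is to request a token with the Azure CLI, as in the sketch below. This assumes you've signed in with `az login` and that the Azure CLI client has been granted access to the backend-app; the resource value is the Application ID URI you record later in this article:

```azurecli
az account get-access-token --resource api://{backend-app-client-id} --query accessToken --output tsv
```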
-1. Add the **validate-jwt** policy to validate the OAuth token for every incoming request.
+## Register an application in Azure AD to represent the API
-## 1. Register an application in Azure AD to represent the API
-
-Using the Azure portal, protect an API with Azure AD by registering an application that represents the API in Azure AD.
+Using the Azure portal, protect an API with Azure AD by first registering an application that represents the API.
For details about app registration, see [Quickstart: Configure an application to expose a web API](../active-directory/develop/quickstart-configure-app-expose-web-apis.md).
For details about app registration, see [Quickstart: Configure an application to
1. On the app **Overview** page, find the **Application (client) ID** value and record it for later.
-1. Under the **Manage** section of the side menu, select **Expose an API** and set the **Application ID URI** with the default value. Record this value for later.
+1. Under the **Manage** section of the side menu, select **Expose an API** and set the **Application ID URI** with the default value. If you're developing a separate client app to obtain OAuth 2.0 tokens for access to the backend-app, record this value for later.
1. Select the **Add a scope** button to display the **Add a scope** page:

    1. Enter a new **Scope name**, **Admin consent display name**, and **Admin consent description**.
For details about app registration, see [Quickstart: Configure an application to
1. Select the **Add scope** button to create the scope.
-1. Repeat steps 8 and 9 to add all scopes supported by your API.
-
-1. Once the scopes are created, make a note of them for use in a subsequent step.
-
-## 2. Register another application in Azure AD to represent a client application
-
-Register every client application that calls the API as an application in Azure AD. In this example, the client application is the **developer console** in the API Management developer portal.
-
-To register another application in Azure AD to represent the Developer Console:
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
-
-1. Select **New registration**.
-
-1. When the **Register an application page** appears, enter your application's registration information:
-
- - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, such as *client-app*.
- - In the **Supported account types** section, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
-
-1. In the **Redirect URI** section, select `Web` and leave the URL field empty for now.
-
-1. Select **Register** to create the application.
-
-1. On the app **Overview** page, find the **Application (client) ID** value and record it for later.
-
-1. Create a client secret for this application to use in a subsequent step.
-
- 1. Under the **Manage** section of the side menu, select **Certificates & secrets**.
- 1. Under **Client secrets**, select **New client secret**.
- 1. Under **Add a client secret**, provide a **Description** and choose when the key should expire.
- 1. Select **Add**.
-
-When the secret is created, note the key value for use in a subsequent step.
-
-## 3. Grant permissions in Azure AD
-
-Now that you have registered two applications to represent the API and the Developer Console, grant permissions to allow the client-app to call the backend-app.
-
-1. In the [Azure portal](https://portal.azure.com), search for and select **App registrations**.
-
-1. Choose your client app. Then in the list of pages for the app, select **API permissions**.
-
-1. Select **Add a Permission**.
-
-1. Under **Select an API**, select **My APIs**, and then find and select your backend-app.
-
-1. Select **Delegated Permissions**, then select the appropriate permissions to your backend-app.
-
-1. Select **Add permissions**.
-
-Optionally:
-1. Navigate to your client app's **API permissions** page.
-
-1. Select **Grant admin consent for \<your-tenant-name>** to grant consent on behalf of all users in this directory.
+1. Repeat the previous two steps to add all scopes supported by your API.
-## 4. Enable OAuth 2.0 user authorization in the Developer Console
+1. Once the scopes are created, make a note of them for use later.
-At this point, you have created your applications in Azure AD, and have granted proper permissions to allow the client-app to call the backend-app.
+## Configure a JWT validation policy to pre-authorize requests
-In this example, you enable OAuth 2.0 user authorization in the developer console (the client app).
-1. In the Azure portal, find the **Authorization endpoint URL** and **Token endpoint URL** and save them for later.
- 1. Open the **App registrations** page.
- 1. Select **Endpoints**.
- 1. Copy the **OAuth 2.0 Authorization Endpoint** and the **OAuth 2.0 Token Endpoint**.
+## Authorization workflow
-1. Browse to your API Management instance.
+1. A user or application acquires a token from Azure AD with permissions that grant access to the backend-app.
-1. Under the **Developer portal** section in the side menu, select **OAuth 2.0 + OpenID Connect**.
+1. The token is added in the Authorization header of API requests to API Management.
-1. Under the **OAuth 2.0** tab, select **Add**.
+1. API Management validates the token by using the `validate-jwt` policy.
-1. Provide a **Display name** and **Description**.
+ * If a request doesn't have a valid token, API Management blocks it.
-1. For the **Client registration page URL**, enter a placeholder value, such as `http://localhost`.
- * The **Client registration page URL** points to a page where users create and configure their own accounts supported by OAuth 2.0 providers.
- * We use a placeholder, since, in this example, users do not create and configure their own accounts.
+ * If a request is accompanied by a valid token, the gateway can forward the request to the API.
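
For example, a pre-authorized request might look like the following sketch, where the gateway host name, API path, and token are placeholders (a subscription key header may also be required, depending on your API settings):

```console
curl -X GET "https://{apim-instance-name}.azure-api.net/{api-path}" -H "Authorization: Bearer {access-token}"
```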
-1. For **Authorization grant types**, select **Authorization code**.
-
-1. Specify the **Authorization endpoint URL** and **Token endpoint URL** you saved earlier:
- 1. Copy and paste the **OAuth 2.0 Authorization Endpoint** into the **Authorization endpoint URL** text box.
- 1. Select **POST** under Authorization request method.
- 1. Enter the **OAuth 2.0 Token Endpoint**, and paste it into the **Token endpoint URL** text box.
- * If you use the **v1** endpoint:
- * Add a body parameter named **resource**.
- * Enter the back-end app **Application ID** for the value.
- * If you use the **v2** endpoint:
- * Use the back-end app scope you created in the **Default scope** field.
- * Set the value for the [`accessTokenAcceptedVersion`](../active-directory/develop/reference-app-manifest.md#accesstokenacceptedversion-attribute) property to `2` in your [application manifest](../active-directory/develop/reference-app-manifest.md).
-
-
- >[!IMPORTANT]
- > While you can use either **v1** or **v2** endpoints, we recommend using v2 endpoints.
-
-1. Specify the client app credentials:
- * For **Client ID**, use the **Application ID** of the client-app.
- * For **Client secret**, use the key you created for the client-app earlier.
-
-1. Make note of the **Redirect URI** for the authorization code grant type.
-
-1. Select **Create**.
-
-1. Return to your client-app registration.
-
-1. Under **Manage**, select **Authentication**.
-
-1. Under **Platform configurations**:
- * Click on **Add a platform**.
- * Select the type as **Web**.
- * Paste the redirect URI you saved earlier under **Redirect URIs**.
- * Click on **Configure** button to save.
-
- Now that the developer console can obtain access tokens from Azure AD via your OAuth 2.0 authorization server, enable OAuth 2.0 user authorization for your API. This enables the developer console to know that it needs to obtain an access token on behalf of the user, before making calls to your API.
-
-1. Browse to your API Management instance, and go to **APIs**.
-
-1. Select the API you want to protect. For example, `Echo API`.
-
-1. Go to **Settings**.
-
-1. Under **Security**:
- 1. Choose **OAuth 2.0**.
- 1. Select the OAuth 2.0 server you configured earlier.
-
-1. Select **Save**.
-
-> [!NOTE]
-> To see the latest configuration of your portal, publish the portal. You can publish the portal in the portal's administrative interface or from the Azure portal.
--
-## 5. Successfully call the API from the developer portal
-
-> [!NOTE]
-> This section does not apply to the **Consumption** tier, as it does not support the developer portal.
--
-## 6. Configure a JWT validation policy to pre-authorize requests
-
-So far:
-* You've tried to make a call from the developer console.
-* You've been prompted and have signed into the Azure AD tenant.
-* The developer console obtains an access token on your behalf, and includes the token in the request made to the API.
-
-However, what if someone calls your API without a token or with an invalid token? For example, if you call the API without the `Authorization` header, the call will still go through, since API Management does not validate the access token. It simply passes the `Authorization` header to the back-end API.
-
-Pre-authorize requests in API Management with the [Validate JWT](./api-management-access-restriction-policies.md#ValidateJWT) policy, by validating the access tokens of each incoming request. If a request does not have a valid token, API Management blocks it.
-
-The following example policy, when added to the `<inbound>` policy section, checks the value of the audience claim in an access token obtained from Azure AD, and returns an error message if the token is not valid.
--
-```xml
-<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
- <openid-config url="https://login.microsoftonline.com/{aad-tenant}/v2.0/.well-known/openid-configuration" />
- <required-claims>
- <claim name="aud">
- <value>{backend-api-application-client-id}</value>
- </claim>
- </required-claims>
-</validate-jwt>
-```
-
-> [!NOTE]
-> The above `openid-config` URL corresponds to the v2 endpoint. For the v1 `openid-config`endpoint, use `https://login.microsoftonline.com/{aad-tenant}/.well-known/openid-configuration`.
-
-> [!TIP]
-> Find the **{aad-tenant}** value as your Azure AD tenant ID in the Azure portal, either on:
-> * The overview page of your Azure AD resource, or
-> * The **Manage > Properties** page of your Azure AD resource.
+## Next steps
-For information on how to configure policies, see [Set or edit policies](./set-edit-policies.md).
+* To learn more about how to build an application and implement OAuth 2.0, see [Azure AD code samples](../active-directory/develop/sample-v2-code.md).
-## Build an application to call the API
+* For an end-to-end example of configuring OAuth 2.0 user authorization in the API Management developer portal, see [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md).
-In this guide, you used the developer console in API Management as the sample client application to call the `Echo API` protected by OAuth 2.0. To learn more about how to build an application and implement OAuth 2.0, see [Azure AD code samples](../active-directory/develop/sample-v2-code.md).
+* Learn more about [Azure AD and OAuth2.0](../active-directory/develop/authentication-vs-authorization.md).
-## Next steps
+* For other ways to secure your back-end service, see [Mutual certificate authentication](./api-management-howto-mutual-certificates.md).
-- Learn more about [Azure AD and OAuth2.0](../active-directory/develop/authentication-vs-authorization.md).
-- Check out more [videos](https://azure.microsoft.com/documentation/videos/index/?services=api-management) about API Management.
-- For other ways to secure your back-end service, see [Mutual Certificate authentication](./api-management-howto-mutual-certificates.md).
-- [Create an API Management service instance](./get-started-create-service-instance.md).
-- [Manage your first API](./import-and-publish.md).
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
Title: Protect SPA backend in Azure API Management with Active Directory B2C
-description: Protect an API with OAuth 2.0 by using Azure Active Directory B2C, Azure API Management and Easy Auth to be called from a JavaScript SPA using the PKCE enabled SPA Auth Flow.
+ Title: Protect APIs in Azure API Management with Active Directory B2C
+description: Protect a serverless API with OAuth 2.0 by using Azure Active Directory B2C, Azure API Management, and Easy Auth to be called from a JavaScript SPA using the PKCE enabled SPA Auth Flow.
documentationcenter: ''
-# Protect SPA backend with OAuth 2.0, Azure Active Directory B2C and Azure API Management
+# Protect serverless APIs with Azure API Management and Azure AD B2C for consumption from a SPA
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
Here's an illustration of the components in use and the flow between them once t
Here's a quick overview of the steps:

1. Create the Azure AD B2C Calling (Frontend, API Management) and API Applications with scopes and grant API Access
-1. Create the sign up and sign in policies to allow users to sign in with Azure AD B2C
+1. Create the sign-up and sign-in policies to allow users to sign in with Azure AD B2C
1. Configure API Management with the new Azure AD B2C Client IDs and keys to Enable OAuth2 user authorization in the Developer Console
1. Build the Function API
1. Configure the Function API to enable EasyAuth with the new Azure AD B2C client IDs and keys and lock down to APIM VIP
Open the Azure AD B2C blade in the portal and do the following steps.
1. Select the **App Registrations** tab
1. Click the 'New Registration' button.
1. Choose 'Single Page Application (SPA)' from the Redirect URI selection box.
-1. Now set the Display Name and AppID URI, choose something unique and relevant to the Frontend application that will use this AAD B2C app registration. In this example, you can use "Frontend Application"
+1. Now set the Display Name and AppID URI; choose something unique and relevant to the Frontend application that will use this Azure Active Directory B2C app registration. In this example, you can use "Frontend Application"
1. As per the first app registration, leave the supported account types selection to default (authenticating users with user flows)
1. Use placeholders for the reply URLs, like 'https://jwt.ms' (a Microsoft-owned token decoding site); we'll update those URLs later.
1. Leave the grant admin consent box ticked
Open the Azure AD B2C blade in the portal and do the following steps.
1. Click 'Grant admin consent for {tenant}' and click 'Yes' from the popup dialog. This popup consents the "Frontend Application" to use the permission "hello" defined in the "Backend Application" created earlier.
1. All permissions should now show for the app as a green tick under the status column
-## Create a "Sign up and Sign in" user flow
+## Create a "Sign-up and Sign-in" user flow
1. Return to the root of the B2C blade by selecting the Azure AD B2C breadcrumb.
1. Switch to the 'User Flows' (under Policies) tab.
1. Click "New user flow"
-1. Choose the 'Sign up and sign in' user flow type, and select 'Recommended' and then 'Create'
+1. Choose the 'Sign-up and sign-in' user flow type, and select 'Recommended' and then 'Create'
1. Give the policy a name and record it for later. For this example, you can use "Frontendapp_signupandsignin"; note that this will be prefixed with "B2C_1_" to make "B2C_1_Frontendapp_signupandsignin"
-1. Under 'Identity providers' and "Local accounts", check 'Email sign up' (or 'User ID sign up' depending on the config of your B2C tenant) and click OK. This configuration is because we'll be registering local B2C accounts, not deferring to another identity provider (like a social identity provider) to use an user's existing social media account.
+1. Under 'Identity providers' and "Local accounts", check 'Email sign up' (or 'User ID sign up' depending on the config of your B2C tenant) and click OK. We use this configuration because we'll be registering local B2C accounts, not deferring to another identity provider (like a social identity provider) to use a user's existing social media account.
1. Leave the MFA and conditional access settings at their defaults.
1. Under 'User Attributes and claims', click 'Show More...' then choose the claim options that you want your users to enter and have returned in the token. Check at least 'Display Name' and 'Email Address' to collect, with 'Display Name' and 'Email Addresses' to return (pay careful attention to the fact that you are collecting emailaddress, singular, and asking to return email addresses, multiple), and click 'OK', then click 'Create'.
1. Click on the user flow that you created in the list, then click the 'Run user flow' button.
Open the Azure AD B2C blade in the portal and do the following steps.
> [!NOTE]
> B2C Policies allow you to expose the Azure AD B2C login endpoints to be able to capture different data components and sign in users in different ways.
>
- > In this case we configured a sign up or sign in flow (policy). This also exposed a well-known configuration endpoint, in both cases our created policy was identified in the URL by the "p=" query string parameter.
+ > In this case we configured a sign-up or sign-in flow (policy). This also exposed a well-known configuration endpoint; in both cases our created policy was identified in the URL by the "p=" query string parameter.
> > Once this is done, you now have a functional Business to Consumer identity platform that will sign users into multiple applications.
Open the Azure AD B2C blade in the portal and do the following steps.
1. Under 'Authentication Providers', choose 'Azure Active Directory'.
1. Choose 'Advanced' from the Management Mode switch.
1. Paste the Backend application's [Application] Client ID (from Azure AD B2C) into the 'Client ID' box
-1. Paste the Well-known open-id configuration endpoint from the sign up and sign in policy into the Issuer URL box (we recorded this configuration earlier).
+1. Paste the Well-known open-id configuration endpoint from the sign-up and sign-in policy into the Issuer URL box (we recorded this configuration earlier).
1. Click 'Show Secret' and paste the Backend application's client secret into the appropriate box.
1. Select OK, which takes you back to the identity provider selection blade/screen.
1. Leave [Token Store](../app-service/overview-authentication-authorization.md#token-store) enabled under advanced settings (default).
Open the Azure AD B2C blade in the portal and do the following steps.
> [!IMPORTANT]
> Now your Function API is deployed and should throw 401 responses if the correct JWT is not supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
- > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests. Be aware that this will change the unauthorized request behavior between the Backend Function App and Frontend SPA as EasyAuth will issue a 302 redirect to AAD instead of a 401 Not Authorized response, we will correct this by using API Management later.
+ > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests. Be aware that this will change the unauthorized request behavior between the Backend Function App and Frontend SPA, as EasyAuth will issue a 302 redirect to Azure Active Directory instead of a 401 Not Authorized response; we will correct this by using API Management later.
> We still have no IP security applied; if you have a valid key and OAuth2 token, anyone can call this from anywhere. Ideally we want to force all requests to come via API Management.
>
You'll need to add CIDR formatted blocks of addresses to the IP restrictions pan
var config = {
    msal: {
        auth: {
- clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in AAD B2C
+ clientId: "{CLIENTID}", // This is the client ID of your FRONTEND application that you registered with the SPA type in Azure Active Directory B2C
            authority: "{YOURAUTHORITYB2C}", // Formatted as https://{b2ctenantname}.b2clogin.com/tfp/{b2ctenantguid or full tenant name including onmicrosoft.com}/{signuporinpolicyname}
            redirectUri: "{StoragePrimaryEndpoint}", // The storage hosting address of the SPA, a web-enabled v2 storage account - recorded earlier as the Primary Endpoint.
            knownAuthorities: ["{B2CTENANTDOMAIN}"] // {b2ctenantname}.b2clogin.com
Now that we have a simple app with a simple secured API, let's test it.
## Test the client application

1. Open the sample app URL that you noted down from the storage account you created earlier.
-1. Click "Sign In" in the top-right-hand corner, this click will pop up your Azure AD B2C sign up or sign in profile.
+1. Click "Sign In" in the top-right-hand corner; this click will pop up your Azure AD B2C sign-up or sign-in profile.
1. The app should welcome you by your B2C profile name.
1. Now click "Call API" and the page should update with the values sent back from your secured API.
1. If you *repeatedly* click the Call API button and you're running in the developer tier or above of API Management, you should note that your solution will begin to rate limit the API and this feature should be reported in the app with an appropriate message.
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 3/15/2022 Last updated : 5/4/2022 # Migrate to App Service Environment v3
To clone an app using the [Azure portal](https://www.portal.azure.com), navigate
## Manually create your apps on an App Service Environment v3
-If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. At this time, all deployment methods except FTP are supported on App Service Environment v3. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
+If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
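
As a sketch (assuming the Azure CLI and a placeholder resource group name), you can also export a resource group's template from the command line:

```azurecli
az group export --name {your-resource-group} > template.json
```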
Once your migration and any testing with your new environment is complete, delet
> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md)

> [!div class="nextstepaction"]
-> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
+> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
If your extension was in the stable version and auto-upgrade-minor-version is se
### Application services extension v 0.13.0 (April 2022)
+- Added support for Azure Functions v4 and introduced support for PowerShell functions
- Added support for Application Insights codeless integration for Node JS applications
- Added support for [Access Restrictions](app-service-ip-restrictions.md) via CLI
- More details provided when extension fails to install, to assist with troubleshooting issues
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
# Form Recognizer read model
-The Form Recognizer v3.0 preview includes the new Read API. Read extracts printed and handwritten from documents. The read model can detect lines, words, locations, and languages and is the core of all the other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the read model as a foundation for extracting texts from documents.
+The Form Recognizer v3.0 preview includes the new Read OCR model. Form Recognizer Read builds on the success of Computer Vision Read and optimizes even further for analyzing documents, including new document formats in the future. It extracts printed and handwritten text from documents and images and can handle mixed languages within a document and within a text line. The read model detects lines, words, locations, and languages, and is the foundational technology powering the text extraction in Form Recognizer Layout, prebuilt, general document, and custom models.
## Development options
Form Recognizer preview version supports several languages for the read model. *
### Text lines and words
-Read API extracts text from documents and images with multiple text angles and colors. It accepts photos of documents, faxes, printed and/or handwritten (English only) text, and mixed modes. Text is extracted from data provided in lines, words, bounding boxes, confidence scores, and style.
+Read API extracts text from documents and images. It accepts PDFs and images of documents, handles printed and/or handwritten text, and supports mixed languages. Text is extracted as text lines, words, bounding boxes, confidence scores, and style (handwritten or not); style classification is supported for Latin languages only.
-### Language detection (v3.0 preview)
+### Language detection
-Read API in v3.0 preview 2 adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the language at the text line level along with the confidence score.
+Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the language at the text line level along with the confidence score.
### Handwritten classification for text lines (Latin only)
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
ms.devlang: java Previously updated : 12/09/2020 Last updated : 05/02/2022
# Tutorial: Use dynamic configuration in a Java Spring app
-App Configuration has two libraries for Spring. `azure-spring-cloud-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`. `azure-spring-cloud-appconfiguration-config-web` requires Spring Web along with Spring Boot. Both libraries support manual triggering to check for refreshed configuration values. `azure-spring-cloud-appconfiguration-config-web` also adds support for automatic checking of configuration refresh.
+App Configuration has two libraries for Spring.
-Refresh allows you to refresh your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. The client library caches a hash ID of the currently loaded configurations to avoid too many calls to the configuration store. The refresh operation doesn't update the value until the cached value has expired, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds. It can be overridden if necessary.
+* `azure-spring-cloud-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`.
+* `azure-spring-cloud-appconfiguration-config-web` requires Spring Web along with Spring Boot, and also adds support for automatic checking of configuration refresh.
-`azure-spring-cloud-appconfiguration-config-web`'s automated refresh is triggered based off activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `azure-spring-cloud-appconfiguration-config-web`'s automated refresh will not trigger a refresh even if the cache expiration time has expired.
+Both libraries support manual triggering to check for refreshed configuration values.
+
+Refresh allows you to update your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. It checks for any changes to configured triggers, including metadata. By default, the minimum amount of time between checks for changes (the refresh interval) is set to 30 seconds.
+
+`azure-spring-cloud-appconfiguration-config-web`'s automated refresh is triggered based on activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `azure-spring-cloud-appconfiguration-config-web`'s automated refresh will not trigger a refresh even if the cache expiration time has expired.
## Use manual refresh
+To use manual refresh, start with a Spring Boot app that uses App Configuration, such as the app you create by following the [Spring Boot quickstart for App Configuration](quickstart-java-spring-app.md).
App Configuration exposes `AppConfigurationRefresh`, which can be used to check whether the cache has expired and, if it has, trigger a refresh.
-```java
-import com.azure.spring.cloud.config.AppConfigurationRefresh;
+1. Update `HelloController` to use `AppConfigurationRefresh`:
+
+ ```java
+ import com.azure.spring.cloud.config.AppConfigurationRefresh;
+
+ ...
+
+ @RestController
+ public class HelloController {
+ private final MessageProperties properties;
+
+ @Autowired(required = false)
+ private AppConfigurationRefresh refresh;
+
+ public HelloController(MessageProperties properties) {
+ this.properties = properties;
+ }
+
+ @GetMapping
+ public String getMessage() throws InterruptedException, ExecutionException {
+ if (refresh != null) {
+ refresh.refreshConfigurations();
+ }
+ return "Message: " + properties.getMessage();
+ }
+ }
+ ```
-...
+ `AppConfigurationRefresh`'s `refreshConfigurations()` returns a `Future` that is true if a refresh has been triggered, and false if not. False means either the cache expiration time hasn't expired, there was no change, or another thread is currently checking for a refresh.
-@Autowired
-private AppConfigurationRefresh appConfigurationRefresh;
+1. Update `bootstrap.properties` to enable refresh:
-...
+ ```properties
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled=true
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval= 30s
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.triggers[0].key=sentinel
+ ```
-public void myConfigurationRefreshCheck() {
- Future<Boolean> triggeredRefresh = appConfigurationRefresh.refreshConfigurations();
-}
-```
+1. Open the **Azure portal** and navigate to your App Configuration resource associated with your application. Select **Configuration Explorer** under **Operations** and create a new key-value pair by selecting **+ Create** > **Key-value** to add the following parameters:
+
+ | Key | Value |
+ |||
+ | sentinel | 1 |
+
+ Leave **Label** and **Content Type** empty for now.
+
+1. Select **Apply**.
+
+1. Build your Spring Boot application with Maven and run it.
+
+ ```shell
+ mvn clean package
+ mvn spring-boot:run
+ ```
+
+1. Open a browser window, and go to the URL: `http://localhost:8080`. You will see the message associated with your key.
-`AppConfigurationRefresh`'s `refreshConfigurations()` returns a `Future` that is true if a refresh has been triggered, and false if not. False means either the cache expiration time hasn't expired, there was no change, or another thread is currently checking for a refresh.
+ You can also use *curl* to test your application, for example:
+
+ ```cmd
+ curl -X GET http://localhost:8080/
+ ```
+
+1. To test dynamic configuration, open the Azure App Configuration portal associated with your application. Select **Configuration Explorer**, and update the value of your displayed key, for example:
+
+ | Key | Value |
+ |||
+ | /application/config.message | Hello - Updated |
+
+1. Update the sentinel key you created earlier to a new value. This change will trigger the application to refresh all configuration keys once the refresh interval has passed.
+
+ | Key | Value |
+ |||
+ | sentinel | 2 |
+
+1. Refresh the browser page twice to see the new message displayed. The first refresh triggers the configuration update, and the second loads the changes.
+
+> [!NOTE]
+> The library only checks for changes after the refresh interval has passed. If the interval hasn't passed, no change will be seen; you'll have to wait for the interval to pass and then trigger the refresh check.
## Use automated refresh
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azu
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.0.0</version>
+ <version>2.6.0</version>
</dependency> ```
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azu
```properties spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled=true
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.refresh-interval= 30s
spring.cloud.azure.appconfiguration.stores[0].monitoring.triggers[0].key=sentinel ```
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azu
||| | sentinel | 2 |
-1. Refresh the browser page to see the new message displayed.
+1. Refresh the browser page twice to see the new message displayed. The first request triggers the refresh and returns using the original scope; the second loads the changes.
+
+> [!NOTE]
+> The library only checks for changes after the refresh interval has passed. If the refresh interval hasn't passed, it won't check for changes; you'll have to wait for the interval to pass and then trigger the refresh check.
## Next steps
-In this tutorial, you enabled your Spring Boot app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
+In this tutorial, you enabled your Spring Boot app to dynamically refresh configuration settings from App Configuration. For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which has all of the details on how the Spring Cloud Azure App Configuration library works. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
> [!div class="nextstepaction"]
> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
ms.devlang: java Previously updated : 04/05/2021 Last updated : 05/02/2022 #Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
In this tutorial, you learn how to:
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 8.
+- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.
- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
- An existing Azure App Configuration Store.
In this tutorial, you learn how to:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.0.0</version>
+ <version>2.6.0</version>
</dependency> <!-- Adds the Ability to Push Refresh -->
In this tutorial, you learn how to:
</dependency> ```
-1. Setup [Maven App Service Deployment](../app-service/quickstart-java.md?tabs=javase) so the application can be deployed to Azure App Service via Maven.
+1. Set up [Maven App Service Deployment](../app-service/quickstart-java.md?tabs=javase) so the application can be deployed to Azure App Service via Maven.
```console
mvn com.microsoft.azure:azure-webapp-maven-plugin:1.12.0:config
```
Event Grid Web Hooks require validation on creation. You can validate by followi
1. Click on `Create` to create the event subscription. When `Create` is selected, a registration request for the Web Hook is sent to your application. The Azure App Configuration client library receives the request, verifies it, and returns a valid response.
-1. Click on `Event Subscriptions` in the `Events` pane to validated that the subscription was created successfully.
+1. Click on `Event Subscriptions` in the `Events` pane to validate that the subscription was created successfully.
:::image type="content" source="./media/event-subscription-view-webhook.png" alt-text="Web Hook shows up in a table on the bottom of the page." :::
Event Grid Web Hooks require validation on creation. You can validate by followi
## Next steps
-In this tutorial, you enabled your Java app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
+In this tutorial, you enabled your Java app to dynamically refresh configuration settings from App Configuration. For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which has all of the details on how the Spring Cloud Azure App Configuration library works. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
> [!div class="nextstepaction"]
> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Convert To The New Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md
ms.devlang: java
Previously updated : 07/08/2021 Last updated : 05/02/2022 # Convert to new App Configuration Spring Boot library
All of the Azure Spring Boot libraries have had their Group and Artifact IDs upd
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
- <version>2.0.0-beta.2</version>
+ <version>2.6.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.0.0-beta.2</version>
+ <version>2.6.0</version>
</dependency> ```
az appconfig kv import -n your-stores-name -s file --format properties --label d
or use the Import/Export feature in the portal.
-When you are completely moved to the new version, you can removed the old keys by running:
+When you are completely moved to the new version, you can remove the old keys by running:
```azurecli az appconfig kv delete -n ConversionTest --key /application_dev/*
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
ms.devlang: java Previously updated : 06/25/2021 Last updated : 05/02/2022 #Customer intent: As a Spring Boot developer, I want to use feature flags to control feature availability quickly and confidently.
The Spring Boot Feature Management libraries extend the framework with comprehen
## Prerequisites

* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-* A supported [Java Development Kit SDK](/java/azure/jdk) with version 8.
+* A supported [Java Development Kit SDK](/java/azure/jdk) with version 11.
* [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.

## Create an App Configuration instance
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.0.0</version>
+ <version>2.6.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-feature-management-web</artifactId>
- <version>2.0.0</version>
+ <version>2.4.0</version>
</dependency> <dependency> <groupId>org.springframework.boot</groupId>
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
spring.cloud.azure.appconfiguration.stores[0].feature-flags.enabled=true ```
-1. In the App Configuration portal for your config store, select `Access keys` from the sidebar. Select the Read-only keys tab. Copy the value of the primary connection string.
+1. Set an environment variable named **APP_CONFIGURATION_CONNECTION_STRING** to the connection string of your App Configuration store. At the command line, run the following command, and then restart the command prompt to allow the change to take effect:
-1. Add the primary connection string as an environment variable using the variable name `APP_CONFIGURATION_CONNECTION_STRING`.
+ ### [Windows command prompt](#tab/windowscommandprompt)
-1. Open the main application Java file, and add `@EnableConfigurationProperties` to enable this feature.
+ If you use the Windows command prompt, run the following command:
- ```java
- package com.example.demo;
+ ```console
+ setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
+ ```
- import org.springframework.boot.SpringApplication;
- import org.springframework.boot.context.properties.ConfigurationProperties;
- import org.springframework.boot.context.properties.EnableConfigurationProperties;
- import org.springframework.boot.autoconfigure.SpringBootApplication;
+ Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
- @SpringBootApplication
- @EnableConfigurationProperties(MessageProperties.class)
- public class DemoApplication {
+ ### [PowerShell](#tab/powershell)
- public static void main(String[] args) {
- SpringApplication.run(DemoApplication.class, args);
- }
- }
+ If you use Windows PowerShell, run the following command:
+
+ ```azurepowershell
+ $Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
```
-1. Create a new Java file named *MessageProperties.java* in the package directory of your app.
+ ### [macOS](#tab/unix)
- ```java
- package com.example.demo;
+ If you use macOS, run the following command:
- import org.springframework.boot.context.properties.ConfigurationProperties;
- import org.springframework.context.annotation.Configuration;
+ ```console
+ export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
+ ```
- @Configuration
- @ConfigurationProperties(prefix = "config")
- public class MessageProperties {
- private String message;
+ Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
- public String getMessage() {
- return message;
- }
+ ### [Linux](#tab/linux)
- public void setMessage(String message) {
- this.message = message;
- }
- }
+ If you use Linux, run the following command:
+
+ ```console
+ export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
```
+ Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it is set properly.
+
+
+1. Create a new Java file named *HelloController.java* in the package directory of your app.
+
+    ```java
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
<link rel="stylesheet" href="/css/main.css"> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
- <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
- <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
- <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
+ <script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script>
+ <script src="https://unpkg.com/@popperjs/core@2"></script>
+ <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js" integrity="sha384-ka7Sk0Gln4gmtz2MlQnikT1wXgYsOg+OMhuP+IlRH9sENBO0LRn5q+8nbTov4+1p" crossorigin="anonymous"></script>
</head> <body>
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
## Next steps
-In this quickstart, you created a new App Configuration store and used it to manage features in a Spring Boot web app via the [Feature Management libraries](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
+In this quickstart, you created a new App Configuration store and used it to manage features in a Spring Boot web app via the [Feature Management libraries](https://azure.github.io/azure-sdk-for-java/springboot.html).
+* Library [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917)
* Learn more about [feature management](./concept-feature-management.md).
* [Manage feature flags](./manage-feature-flags.md).
* [Use feature flags in a Spring Boot Core app](./use-feature-flags-spring-boot.md).
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
Title: Quickstart to learn how to use Azure App Configuration
description: In this quickstart, create a Java Spring app with Azure App Configuration to centralize storage and management of application settings separate from your code. documentationcenter: ''-+ editor: '' ms.devlang: java Previously updated : 04/18/2020 Last updated : 05/02/2022 -+ #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place. # Quickstart: Create a Java Spring app with Azure App Configuration
In this quickstart, you incorporate Azure App Configuration into a Java Spring a
## Prerequisites

- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 8.
+- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.
- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.

## Create an App Configuration store
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
1. Open the *pom.xml* file in a text editor, and add the Spring Cloud Azure Config starter to the list of `<dependencies>`:
- **Spring Boot 2.4**
+ **Spring Boot 2.6**
```xml <dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
- <version>2.0.0</version>
+ <version>2.6.0</version>
</dependency> ```
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
} ```
-1. Create a new file named `bootstrap.properties` under the resources directory of your app, and add the following lines to the file. Replace the sample values with the appropriate properties for your App Configuration store.
+1. Open the auto-generated unit test and update it to disable Azure App Configuration; otherwise, it will try to load from the service when running unit tests.
+
+ ```java
+ package com.example.demo;
+
+ import org.junit.jupiter.api.Test;
+ import org.springframework.boot.test.context.SpringBootTest;
+
+ @SpringBootTest(properties = "spring.cloud.azure.appconfiguration.enabled=false")
+ class DemoApplicationTests {
+
+ @Test
+ void contextLoads() {
+ }
+
+ }
+ ```
+
+1. Create a new file named `bootstrap.properties` under the resources directory of your app, and add the following line to the file.
```CLI
spring.cloud.azure.appconfiguration.stores[0].connection-string=${APP_CONFIGURATION_CONNECTION_STRING}
```
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
## Build and run the app locally
-1. Build your Spring Boot application with Maven and run it, for example:
+1. Open a command prompt in the root directory and run the following commands to build your Spring Boot application with Maven and run it.
```cmd
mvn clean package
```
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
## Next steps
-In this quickstart, you created a new App Configuration store and used it with a Java Spring app. For more information, see [Spring on Azure](/java/azure/spring-framework/). To learn how to enable your Java Spring app to dynamically refresh configuration settings, continue to the next tutorial.
+In this quickstart, you created a new App Configuration store and used it with a Java Spring app. For more information, see [Spring on Azure](/java/azure/spring-framework/). For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which has all of the details on how the Spring Cloud Azure App Configuration library works. To learn how to enable your Java Spring app to dynamically refresh configuration settings, continue to the next tutorial.
> [!div class="nextstepaction"]
> [Enable dynamic configuration](./enable-dynamic-configuration-java-spring-app.md)
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-spring-boot.md
ms.devlang: java Previously updated : 06/25/2021 Last updated : 05/02/2022
The easiest way to connect your Spring Boot application to App Configuration is
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-feature-management-web</artifactId>
- <version>2.0.0</version>
+ <version>2.6.0</version>
</dependency> ```
The feature manager supports *application.yml* as a configuration source for fea
```yml feature-management:
- feature-set:
- feature-a: true
- feature-b: false
- feature-c:
- enabled-for:
- -
- name: Percentage
- parameters:
- value: 50
+ feature-a: true
+ feature-b: false
+ feature-c:
+ enabled-for:
+ -
+ name: PercentageFilter
+ parameters:
+ Value: 50
```

By convention, the `feature-management` section of this YML document is used for feature flag settings. The prior example shows three feature flags with their filters defined in the `EnabledFor` property:

* `feature-a` is *on*.
* `feature-b` is *off*.
-* `feature-c` specifies a filter named `Percentage` with a `parameters` property. `Percentage` is a configurable filter. In this example, `Percentage` specifies a 50-percent probability for the `feature-c` flag to be *on*.
+* `feature-c` specifies a filter named `PercentageFilter` with a `parameters` property. `PercentageFilter` is a configurable filter. In this example, `PercentageFilter` specifies a 50-percent probability for the `feature-c` flag to be *on*.
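
To make the flag check concrete, here's a minimal sketch of evaluating `feature-c` from a web controller. It assumes the `FeatureManager` bean provided by the feature-management library; the controller class and route names are hypothetical:

```java
import com.azure.spring.cloud.feature.manager.FeatureManager;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class FeatureCheckController {

    private final FeatureManager featureManager;

    public FeatureCheckController(FeatureManager featureManager) {
        this.featureManager = featureManager;
    }

    @GetMapping("/feature-c-status")
    public String featureCStatus() {
        // isEnabledAsync evaluates the flag and its filters (such as PercentageFilter)
        // and returns a Mono<Boolean>; block() resolves it synchronously here.
        if (Boolean.TRUE.equals(featureManager.isEnabledAsync("feature-c").block())) {
            return "feature-c is on";
        }
        return "feature-c is off";
    }
}
```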
## Feature flag checks
public String getOldFeature() {
## Next steps
-In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `azure-spring-cloud-feature-management-web` libraries. For more information about feature management support in Spring Boot and App Configuration, see the following resources:
+In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `azure-spring-cloud-feature-management-web` libraries. For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which covers in detail how the Spring Cloud Azure App Configuration library works. For more information about feature management support in Spring Boot and App Configuration, see the following resources:
* [Spring Boot feature flag sample code](./quickstart-feature-flag-spring-boot.md)
* [Manage feature flags](./manage-feature-flags.md)
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
Title: Tutorial for using Azure App Configuration Key Vault references in a Java
description: In this tutorial, you learn how to use Azure App Configuration's Key Vault references from a Java Spring Boot app documentationcenter: ''-+ editor: '' ms.assetid:
ms.devlang: java Previously updated : 08/11/2020- Last updated : 05/02/2022+ #Customer intent: I want to update my Spring Boot application to reference values stored in Key Vault through App Configuration.
In this tutorial, you learn how to:
## Prerequisites * Azure subscription - [create one for free](https://azure.microsoft.com/free/)
-* A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 8.
+* A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.
* [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above. ## Create a vault
To add a secret to the vault, you need to take just a few additional steps. In t
## Next steps
-In this tutorial, you created an App Configuration key that references a value stored in Key Vault. To learn how to use feature flags in your Java Spring application, continue to the next tutorial.
+In this tutorial, you created an App Configuration key that references a value stored in Key Vault. For further questions, see the [reference documentation](https://go.microsoft.com/fwlink/?linkid=2180917), which covers in detail how the Spring Cloud Azure App Configuration library works. To learn how to use feature flags in your Java Spring application, continue to the next tutorial.
> [!div class="nextstepaction"]
> [Managed identity integration](./quickstart-feature-flag-spring-boot.md)
azure-arc Reference Az Arcdata Ad Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-ad-connector.md
+
+ Title: az arcdata ad-connector
+
+description: Reference article for az arcdata ad-connector.
+++ Last updated : 05/02/2022+++++
+# az arcdata ad-connector
+
+Manage Active Directory authentication for Azure Arc data services.
+## Commands
+| Command | Description|
+| | |
+[az arcdata ad-connector create](#az-arcdata-ad-connector-create) | Create a new Active Directory connector.
+[az arcdata ad-connector update](#az-arcdata-ad-connector-update) | Update the settings of an existing Active Directory connector.
+[az arcdata ad-connector delete](#az-arcdata-ad-connector-delete) | Delete an existing Active Directory connector.
+[az arcdata ad-connector show](#az-arcdata-ad-connector-show) | Get the details of an existing Active Directory connector.
+## az arcdata ad-connector create
+Create a new Active Directory connector.
+```azurecli
+az arcdata ad-connector create
+```
+### Examples
+Ex 1 - Deploy a new Active Directory connector in indirect mode.
+```azurecli
+az arcdata ad-connector create --name arcadc --k8s-namespace arc --realm CONTOSO.LOCAL --account-provisioning manual --primary-ad-dc-hostname azdc01.contoso.local --secondary-ad-dc-hostnames "azdc02.contoso.local, azdc03.contoso.local" --netbios-domain-name CONTOSO --dns-domain-name contoso.local --nameserver-addresses 10.10.10.11,10.10.10.12,10.10.10.13 --dns-replicas 2 --prefer-k8s-dns false --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az arcdata ad-connector update
+Update the settings of an existing Active Directory connector.
+```azurecli
+az arcdata ad-connector update
+```
+### Examples
+Ex 1 - Update an existing Active Directory connector in indirect mode.
+```azurecli
+az arcdata ad-connector update --name arcadc --k8s-namespace arc --primary-ad-dc-hostname azdc01.contoso.local --secondary-ad-dc-hostname "azdc02.contoso.local, azdc03.contoso.local" --nameserver-addresses 10.10.10.11,10.10.10.12,10.10.10.13 --dns-replicas 2 --prefer-k8s-dns false --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az arcdata ad-connector delete
+Delete an existing Active Directory connector.
+```azurecli
+az arcdata ad-connector delete
+```
+### Examples
+Ex 1 - Delete an existing Active Directory connector in indirect mode.
+```azurecli
+az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az arcdata ad-connector show
+Get the details of an existing Active Directory connector.
+```azurecli
+az arcdata ad-connector show
+```
+### Examples
+Ex 1 - Get an existing Active Directory connector in indirect mode.
+```azurecli
+az arcdata ad-connector show --name arcadc --k8s-namespace arc --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
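For example, a JMESPath query can trim the output to a single field; the `name` path here is illustrative, and the actual output shape may differ:

```azurecli
az arcdata ad-connector show --name arcadc --k8s-namespace arc --use-k8s --query "name" --output tsv
```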
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Arcdata Dc Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-config.md
# az arcdata dc config+
+Configuration commands.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Arcdata Dc Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-debug.md
# az arcdata dc debug+
+Debug data controller.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Arcdata Dc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-endpoint.md
# az arcdata dc endpoint+
+Endpoint commands.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Arcdata Dc Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc-status.md
# az arcdata dc status+
+Status commands.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Arcdata Dc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-dc.md
# az arcdata dc+
+Create, delete, and manage data controllers.
## Commands
| Command | Description|
| --- | --- |
[az arcdata dc create](#az-arcdata-dc-create) | Create data controller.
[az arcdata dc upgrade](#az-arcdata-dc-upgrade) | Upgrade data controller.
+[az arcdata dc update](#az-arcdata-dc-update) | Update data controller.
[az arcdata dc list-upgrades](#az-arcdata-dc-list-upgrades) | List available upgrade versions. [az arcdata dc delete](#az-arcdata-dc-delete) | Delete data controller. [az arcdata dc endpoint](reference-az-arcdata-dc-endpoint.md) | Endpoint commands.
Output format. Allowed values: json, jsonc, table, tsv. Default: json.
JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
#### `--verbose`
Increase logging verbosity. Use `--debug` for full debug logs.
+## az arcdata dc update
+Updates the data controller to enable or disable automatic upload of logs and metrics.
+```azurecli
+az arcdata dc update
+```
+### Examples
Ex 1 - Update the data controller to enable automatic upload of logs and metrics.
+```azurecli
+az arcdata dc update --auto-upload-logs true --auto-upload-metrics true --name dc-name --resource-group resource-group
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
## az arcdata dc list-upgrades
Attempts to list versions that are available in the Docker image registry for upgrade. A kube config is required on your system, along with the following environment variables: `AZDATA_USERNAME`, `AZDATA_PASSWORD`.
```azurecli
az arcdata dc list-upgrades
```
azure-arc Reference Az Arcdata Resource Kind https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata-resource-kind.md
# az arcdata resource-kind+
+Resource-kind commands to define and template custom resources on your cluster.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Arcdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-arcdata.md
## Commands
| Command | Description|
| --- | --- |
-[az arcdata dc](reference-az-arcdata-dc.md) | Create, delete, and manage data controllers.
-[az arcdata resource-kind](reference-az-arcdata-resource-kind.md) | Resource-kind commands to define and template custom resources on your cluster.
+|[az arcdata dc](reference-az-arcdata-dc.md) | Create, delete, and manage data controllers.
+|[az arcdata resource-kind](reference-az-arcdata-resource-kind.md) | Resource-kind commands to define and template custom resources on your cluster.
+|[az arcdata ad-connector](reference-az-arcdata-ad-connector.md) | Manage Active Directory authentication for Azure Arc data services.|
+ ## az sql mi-arc | Command | Description| | | |
-[az sql_mi-arc](reference-az-sql-mi-arc.md) | Manage Azure Arc-enabled SQL managed instances.
+|[az sql_mi-arc](reference-az-sql-mi-arc.md) | Manage Azure Arc-enabled SQL managed instances.
## az sql midb-arc | Command | Description| | | |
-[az sql midb-arc](reference-az-sql-midb-arc.md) | Manage databases for Azure Arc-enabled SQL managed instances.
+|[az sql midb-arc](reference-az-sql-midb-arc.md) | Manage databases for Azure Arc-enabled SQL managed instances.
+
+## az sql instance-failover-group-arc
+| Command | Description|
+| | |
+|[az sql instance-failover-group-arc](reference-az-sql-instance-failover-group-arc.md) | Create or Delete a Failover Group.|
+ ## az postgres arc-server | Command | Description| | | |
-[az postgres arc-server](reference-az-postgres-arc-server.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
+|[az postgres arc-server](reference-az-postgres-arc-server.md) | Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
azure-arc Reference Az Postgres Arc Server Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-postgres-arc-server-endpoint.md
# az postgres arc-server endpoint+
+Manage Azure Arc enabled PostgreSQL Hyperscale server group endpoints.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Postgres Arc Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-postgres-arc-server.md
# az postgres arc-server+
+Manage Azure Arc enabled PostgreSQL Hyperscale server groups.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Sql Instance Failover Group Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-instance-failover-group-arc.md
+
+ Title: az sql instance-failover-group-arc
+
+description: Reference article for az sql instance-failover-group-arc.
+++ Last updated : 05/02/2022+++++
+# az sql instance-failover-group-arc
+
+Create or Delete a Failover Group.
+## Commands
+| Command | Description|
+| | |
+[az sql instance-failover-group-arc create](#az-sql-instance-failover-group-arc-create) | Create a failover group resource.
+[az sql instance-failover-group-arc update](#az-sql-instance-failover-group-arc-update) | Update a failover group resource.
+[az sql instance-failover-group-arc delete](#az-sql-instance-failover-group-arc-delete) | Delete a failover group resource on a SQL managed instance.
+[az sql instance-failover-group-arc show](#az-sql-instance-failover-group-arc-show) | Show a failover group resource.
+## az sql instance-failover-group-arc create
+Create a failover group resource to create a distributed availability group.
+```azurecli
+az sql instance-failover-group-arc create
+```
+### Examples
+Ex 1 - Create a failover group resource fogCr1 to create a failover group with the shared name sharedName1 between the SQL managed instance sqlmi1 and the partner SQL managed instance sqlmi2. It requires the partner sqlmi primary mirror endpoint partnerPrimary:5022 and the partner sqlmi mirror endpoint certificate file ./sqlmi2.cer.
+```azurecli
+az sql instance-failover-group-arc create --name fogCr1 --shared-name sharedName1 --mi sqlmi1 --role primary --partner-mi sqlmi2 --partner-mirroring-url partnerPrimary:5022 --partner-mirroring-cert-file ./sqlmi2.cer --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az sql instance-failover-group-arc update
+Update a failover group resource to change the role of the distributed availability group.
+```azurecli
+az sql instance-failover-group-arc update
+```
+### Examples
+Ex 1 - Update the failover group resource fogCr1 from the primary role to the secondary role.
+```azurecli
+az sql instance-failover-group-arc update --name fogCr1 --role secondary --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az sql instance-failover-group-arc delete
+Delete a failover group resource on a SQL managed instance.
+```azurecli
+az sql instance-failover-group-arc delete
+```
+### Examples
+Ex 1 - delete failover group resources named fogCr1.
+```azurecli
+az sql instance-failover-group-arc delete --name fogCr1 --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
+## az sql instance-failover-group-arc show
+Show a failover group resource.
+```azurecli
+az sql instance-failover-group-arc show
+```
+### Examples
+Ex 1 - show failover group resources named fogCr1.
+```azurecli
+az sql instance-failover-group-arc show --name fogCr1 --use-k8s
+```
+### Global Arguments
+#### `--debug`
+Increase logging verbosity to show all debug logs.
+#### `--help -h`
+Show this help message and exit.
+#### `--output -o`
+Output format. Allowed values: json, jsonc, table, tsv. Default: json.
+#### `--query -q`
+JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
+#### `--verbose`
+Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Mi Arc Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc-config.md
# az sql mi-arc config+
+Configuration commands.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Sql Mi Arc Dag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc-dag.md
- Title: az sql mi-arc dag reference-
-description: Reference article for az sql mi-arc dag commands.
--- Previously updated : 11/04/2021-----
-# az sql mi-arc dag
-## Commands
-| Command | Description|
-| | |
-[az sql mi-arc dag create](#az-sql-mi-arc-dag-create) | Create a distributed availability group custom resource
-[az sql mi-arc dag delete](#az-sql-mi-arc-dag-delete) | Delete a distributed availability group custom resource on a sqlmi instance.
-[az sql mi-arc dag show](#az-sql-mi-arc-dag-show) | show a distributed availability group custom resource.
-## az sql mi-arc dag create
-Create a distributed availability group custom resource to create a distributed availability group
-```azurecli
-az sql mi-arc dag create
-```
-### Examples
-Ex 1 - Create a distributed availability group custom resource dagCr1 to create distributed availability group dagName1 between local sqlmi instance sqlmi1 and remote sqlmi instance sqlmi2. It requires remote sqlmi primary mirror remotePrimary:5022 and remote sqlmi mirror endpoint certificate file ./sqlmi2.cer.
-```azurecli
-az sql mi-arc dag create --name dagCr1 --dag-name dagName1 --local-instance-name sqlmi1 --local-primary local --remote-instance-name sqlmi2 --remote-mirroring-url remotePrimary:5022 --remote-mirroring-cert-file ./sqlmi2.cer --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc dag delete
-Delete a distributed availability group custom resource on a sqlmi instance to delete a distributed availability group. It requires a custom resource name.
-```azurecli
-az sql mi-arc dag delete
-```
-### Examples
-Ex 1 - delete distributed availability group resources named dagCr1.
-```azurecli
-az sql mi-arc dag delete --name dagCr1 --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
-## az sql mi-arc dag show
-show a distributed availability group custom resource. It requires a custom resource name
-```azurecli
-az sql mi-arc dag show
-```
-### Examples
-Ex 1 - show distributed availability group resources named dagCr1.
-```azurecli
-az sql mi-arc dag show --name dagCr1 --use-k8s
-```
-### Global Arguments
-#### `--debug`
-Increase logging verbosity to show all debug logs.
-#### `--help -h`
-Show this help message and exit.
-#### `--output -o`
-Output format. Allowed values: json, jsonc, table, tsv. Default: json.
-#### `--query -q`
-JMESPath query string. See [http://jmespath.org/](http://jmespath.org) for more information and examples.
-#### `--verbose`
-Increase logging verbosity. Use `--debug` for full debug logs.
azure-arc Reference Az Sql Mi Arc Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc-endpoint.md
# az sql mi-arc endpoint+
+View and manage SQL endpoints.
## Commands
| Command | Description|
| --- | --- |
azure-arc Reference Az Sql Mi Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-mi-arc.md
# az sql mi-arc+
+Manage Azure Arc-enabled SQL managed instances.
## Commands
| Command | Description|
| --- | --- |
[az sql mi-arc upgrade](#az-sql-mi-arc-upgrade) | Upgrade SQL managed instance.
[az sql mi-arc list](#az-sql-mi-arc-list) | List SQL managed instances.
[az sql mi-arc config](reference-az-sql-mi-arc-config.md) | Configuration commands.
-[az sql mi-arc dag](reference-az-sql-mi-arc-dag.md) | Create or Delete a Distributed Availability Group.
## az sql mi-arc create
To set the password of the SQL managed instance, set the environment variable AZDATA_PASSWORD.
```azurecli
az sql mi-arc create
```
Create a directly connected SQL managed instance.
```azurecli
az sql mi-arc create --name name --resource-group group --location location --subscription subscription --custom-location custom-location
```
+Create an indirectly connected SQL managed instance with Active Directory authentication.
+```azurecli
+az sql mi-arc create --name contososqlmi --k8s-namespace arc --ad-connector-name arcadc --ad-connector-namespace arc --keytab-secret arcuser-keytab-secret --ad-account-name arcuser --primary-dns-name contososqlmi-primary.contoso.local --primary-port-number 81433 --use-k8s
+```
### Global Arguments
#### `--debug`
Increase logging verbosity to show all debug logs.
az sql mi-arc upgrade
### Examples
Upgrade SQL managed instance.
```azurecli
-az sql mi-arc upgrade -n sqlmi1 --k8s-namespace arc --desired-version v1.1.0 --use-k8s
+az sql mi-arc upgrade -n sqlmi1 -k arc --desired-version v1.1.0 --use-k8s
```
### Global Arguments
#### `--debug`
azure-arc Reference Az Sql Midb Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reference/reference-az-sql-midb-arc.md
# az sql midb-arc+
+Manage databases for Azure Arc-enabled SQL managed instances.
## Commands
| Command | Description|
| --- | --- |
[az sql midb-arc restore](#az-sql-midb-arc-restore) | Restore a database to an Azure Arc enabled SQL managed instance.
## az sql midb-arc restore
Restore a database to an Azure Arc enabled SQL managed instance.
```azurecli
az sql midb-arc restore
```
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 04/06/2022 Last updated : 05/04/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
-## April 2022
+## May 4, 2022
+
+This release is published May 4, 2022.
+
+### Image tag
+
+`v1.6.0_2022-05-02`
+
+For complete release version information, see [Version log](version-log.md).
+
+### Data controller
+
+Added:
+
+- Create, update, and delete an Active Directory (AD) connector
+- Create a SQL Managed Instance with AD connectivity, via the Azure CLI extension, in direct connectivity mode
+
+The data controller now sends controller logs to the Log Analytics workspace if log upload is enabled.
+
+Removed the `--ad-connector-namespace` parameter from the `az sql mi-arc create` command because, for now, the AD connector resource must always be in the same namespace as the SQL Managed Instance resource.
+
+Updated ElasticSearch to the latest version, `7.9.1-36fefbab37-205465`. Grafana, Kibana, Telegraf, Fluent Bit, and Go were also updated.
+
+All container image sizes were reduced by approximately 40% on average.
+
+Introduced a new `create-sql-keytab.ps1` PowerShell script to help create keytabs.
+
+### SQL Managed Instance
+
+Separated the availability group and failover group status into two different sections on Kubernetes.
+
+Updated SQL engine binaries to the latest version.
+
+Added support for `NodeSelector`, `TopologySpreadConstraints`, and `Affinity`. Currently available only through Kubernetes yaml/json file create/edit. There is no Azure CLI, Azure portal, or Azure Data Studio user experience yet.
+
+Added support for specifying labels and annotations on the secondary service endpoint.
+
+`REQUIRED_SECONDARIES_TO_COMMIT` is now a function of the number of replicas:
+
+- If three or more replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 1`.
+- If one or two replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 0`.
+
+### User experience improvements
+
+Notifications are now shown in the Azure portal if billing data hasn't been uploaded to Azure recently.
+
+#### Azure Data Studio
+
+Added upgrade experience for Data Controller in direct and indirect connectivity mode.
+
+## April 6, 2022
This release is published April 6, 2022.
Additional updates include:
- Issues with Python environments when using azdata in notebooks in Azure Data Studio resolved
- The pg_audit extension is now available for PostgreSQL Hyperscale
- A backup ID is no longer required when doing a full restore of a PostgreSQL Hyperscale database
-- The status (health state) is reported for each of the PostgreSQL instances in a sever group
+- The status (health state) is reported for each of the PostgreSQL instances in a server group
In earlier releases, the status was aggregated at the server group level and not itemized at the PostgreSQL node level.
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
Previously updated : 03/09/2022 Last updated : 05/04/2022 # Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## May 4, 2022
+
+|Component |Value |
+|--||
+|Container images tag |`v1.6.0_2022-05-02`|
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v5</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v5</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2|
+|ARM API version|2022-03-01-preview|
+|`arcdata` Azure CLI extension version| 1.4.0|
+|Arc enabled Kubernetes helm chart extension version|1.2.19481002|
+|Arc Data extension for Azure Data Studio|1.2.0|
## April 6, 2022
|Component |Value |
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Each metric includes two versions. One metric measures performance for the entir
| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
| Connections Created Per Second | The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
| Connections Closed Per Second | The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. |
-| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
+| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy because of low-priority background security processes running on the node, so we recommend monitoring the Server Load metric to track redis-server load.|
| Errors | Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could have more added in the future. The error types represented now are as follows: <br/><ul><li>**Failover** – when a cache fails over (subordinate promotes to primary)</li><li>**Dataloss** – when there's data loss on the cache</li><li>**UnresponsiveClients** – when the clients aren't reading data from the server fast enough</li><li>**AOF** – when there's an issue related to AOF persistence</li><li>**RDB** – when there's an issue related to RDB persistence</li><li>**Import** – when there's an issue related to Import RDB</li><li>**Export** – when there's an issue related to Export RDB</li></ul> |
| Evicted Keys |The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit. This number maps to `evicted_keys` from the Redis INFO command. |
| Expired Keys |The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.|
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
Title: Azure SQL input binding for Functions
description: Learn to use the Azure SQL input binding in Azure Functions. Previously updated : 4/1/2022 Last updated : 5/3/2022 -+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL input binding for Azure Functions (preview)
-The Azure SQL input binding retrieves data from a database and passes it to the input parameter of the function.
+When a function runs, the Azure SQL input binding retrieves data from a database and passes it to the input parameter of the function.
For information on setup and configuration details, see the [overview](./functions-bindings-azure-sql.md).
-<a id="example" name="example"></a>
+## Example
-# [C#](#tab/csharp)
+
+# [In-process](#tab/in-process)
This section contains the following examples:
The examples refer to a `ToDoItem` class and a corresponding database table:
<a id="http-trigger-look-up-id-from-query-string-c"></a>
-### HTTP trigger, look up ID from query string
- The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query. > [!NOTE]
namespace AzureSQLSamples
<a id="http-trigger-get-multiple-items-from-route-data-c"></a>
-### HTTP trigger, get multiple items from route data
- The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query. ```cs
namespace AzureSQLSamples
``` <a id="http-trigger-delete-one-or-multiple-rows-c"></a>
-### HTTP trigger, delete one or multiple rows
The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter. The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter. :::code language="csharp" source="~/functions-sql-todo-sample/DeleteToDo.cs" range="4-30":::
+# [Isolated process](#tab/isolated-process)
-# [JavaScript](#tab/javascript)
+Isolated process isn't currently supported.
-The Azure SQL binding for Azure Functions does not currently support JavaScript.
+<!-- Uncomment to support C# script examples.
+# [C# Script](#tab/csharp-script)
-# [Python](#tab/python)
+-->
+
-The Azure SQL binding for Azure Functions does not currently support Python.
-
+> [!NOTE]
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
-## Attributes and annotations
-# [C#](#tab/csharp)
+<!### Use these pivots when we get other non-C# languages added. ###
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute.
+
-The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
-Here's a `Sql` attribute example in a method signature:
-```csharp
- [FunctionName("GetToDoItems")]
- public static IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")]
- HttpRequest req,
- [Sql("select * from dbo.ToDo where [Priority] > @Priority",
- CommandType = System.Data.CommandType.Text,
- Parameters = "@Priority={priority}",
- ConnectionStringSetting = "SqlConnectionString")]
- IEnumerable<ToDoItem> toDoItems)
- {
- ...
- }
-```
-# [JavaScript](#tab/javascript)
+>
-The Azure SQL binding for Azure Functions does not currently support JavaScript.
+## Attributes
-# [Python](#tab/python)
+In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
-The Azure SQL binding for Azure Functions does not currently support Python.
+| Attribute property |Description|
+|||
+| **CommandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
+| **ConnectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
+| **CommandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **Parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+<!### Use these pivots when we get other non-C# languages added. ###
+## Annotations
-
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+| Element |Description|
+|||
+| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
+| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `sql`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `in`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the table or entity in function code. |
+| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This isn't the actual connection string and must instead resolve to an environment variable. |
+| **commandType** | A [CommandType](/dotnet/api/system.data.commandtype) value, which is [Text](/dotnet/api/system.data.commandtype#fields) for a query and [StoredProcedure](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+| **parameters** | Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). |
+-->
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
+## Usage
++
+The attribute's constructor takes the SQL command text, the command type, parameters, and the connection string setting name. The command can be a Transact-SQL (T-SQL) query with the command type `System.Data.CommandType.Text` or stored procedure name with the command type `System.Data.CommandType.StoredProcedure`. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
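Here's a sketch of what that looks like in a method signature, mirroring the `ToDoItem` query example earlier in this article (the function body is illustrative):

```csharp
[FunctionName("GetToDoItems")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")]
    HttpRequest req,
    [Sql("select * from dbo.ToDo where [Priority] > @Priority",
        CommandType = System.Data.CommandType.Text,
        Parameters = "@Priority={priority}",
        ConnectionStringSetting = "SqlConnectionString")]
    IEnumerable<ToDoItem> toDoItems)
{
    // The binding runs the query before the function body executes,
    // so toDoItems already holds the filtered rows.
    return new OkObjectResult(toDoItems);
}
```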
## Next steps
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
Last updated 4/1/2022 -+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL output binding for Azure Functions (preview)
The Azure SQL output binding lets you write to a database.
For information on setup and configuration details, see the [overview](./functions-bindings-azure-sql.md).
+## Example
-<a id="example" name="example"></a>
-
-# [C#](#tab/csharp)
+# [In-process](#tab/in-process)
This section contains the following examples:
namespace AzureSQLSamples
} ```
-# [JavaScript](#tab/javascript)
-The Azure SQL binding for Azure Functions does not currently support JavaScript.
+# [Isolated process](#tab/isolated-process)
-# [Python](#tab/python)
+Isolated process isn't currently supported.
-The Azure SQL binding for Azure Functions does not currently support Python.
+<!-- Uncomment to support C# script examples.
+# [C# Script](#tab/csharp-script)
+-->
-## Attributes and annotations
-# [C#](#tab/csharp)
+> [!NOTE]
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
-In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute.
+<!### Use these pivots when we get other non-C# languages added. ###
-The attribute's constructor takes the SQL command text and the connection string setting name. For an output binding, the SQL command string is a table name where the data is to be stored. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
+
-Here's a `Sql` attribute example in a method signature:
-```csharp
- [FunctionName("HTTPtoSQL")]
- public static IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Function, "get", Route = "addtodo")] HttpRequest req,
- [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] out ToDoItem newItem)
- {
- ...
- }
-```
-# [JavaScript](#tab/javascript)
-The Azure SQL binding for Azure Functions does not currently support JavaScript.
+>
-# [Python](#tab/python)
+## Attributes
-The Azure SQL binding for Azure Functions does not currently support Python.
+In [C# class libraries](functions-dotnet-class-library.md), use the [Sql](https://github.com/Azure/azure-functions-sql-extension/blob/main/src/SqlAttribute.cs) attribute, which has the following properties:
-
+| Attribute property |Description|
+|||
+| **CommandText** | Required. The name of the table being written to by the binding. |
+| **ConnectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable. |
++
+<!### Use these pivots when we get other non-C# languages added. ###
+## Annotations
+In the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@Sql` annotation on parameters whose value would come from Azure SQL. This annotation supports the following elements:
+
+| Element |Description|
+|||
+| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+
+## Configuration
+
+The following table explains the binding configuration properties that you set in the *function.json* file.
+
+|function.json property | Description|
+||-|
+|**type** | Must be set to `sql`. This property is set automatically when you create the binding in the Azure portal.|
+|**direction** | Must be set to `out`. This property is set automatically when you create the binding in the Azure portal. |
+|**name** | The name of the variable that represents the table or entity in function code. |
+| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. |
+| **connectionStringSetting** | The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+
+-->
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
+## Usage
+
+The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-3.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
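Here's a sketch of what that looks like in a method signature. When the function returns, the binding upserts the value of the `out` parameter into the named table; the `ToDoItem` property shown is an assumption for illustration:

```csharp
[FunctionName("HTTPtoSQL")]
public static IActionResult Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "addtodo")] HttpRequest req,
    [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] out ToDoItem newItem)
{
    // Populate the out parameter; the binding writes it to dbo.ToDo on return.
    // The Title property is hypothetical and depends on your ToDoItem class.
    newItem = new ToDoItem { Title = req.Query["title"] };
    return new OkResult();
}
```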
## Next steps
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Title: Azure SQL bindings for Functions
description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 4/1/2022 Last updated : 5/3/2022 -+
+zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL bindings for Azure Functions overview (preview)
This set of articles explains how to work with [Azure SQL](/azure/azure-sql/inde
| Read data from a database | [Input binding](./functions-bindings-azure-sql-input.md) |
| Save data to a database |[Output binding](./functions-bindings-azure-sql-output.md) |
+## Install extension
+
+The extension NuGet package you install depends on the C# mode you're using in your function app:
+
+# [In-process](#tab/in-process)
+
+Functions execute in the same process as the Functions host. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
+
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql).
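For example, from the project directory (the `--prerelease` flag is needed while the extension is in preview):

```cmd
dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease
```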
+
+# [Isolated process](#tab/isolated-process)
+
+Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated process](dotnet-isolated-process-guide.md).
+ > [!NOTE]
-> This reference is for [Azure Functions version 2.x and higher](functions-versions.md).
->
-> This binding requires connectivity to an Azure SQL or SQL Server database.
+> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated process.
-## Add to your Functions app
+<!--
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
+-->
-### Functions
+<!-- awaiting bundle support
+# [C# script](#tab/csharp-script)
-Working with the trigger and bindings requires you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
+Functions run as C# script, which is supported primarily for C# portal editing. To update existing binding extensions for C# script apps running in the portal without having to republish your function app, see [Update your extensions].
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [preview NuGet package] | |
-<!--| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension] is recommended to use with Visual Studio Code. | -->
+You can install this version of the extension in your function app by registering the [extension bundle], version 3.x, or a later version.
+-->
+
-[preview NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql
-[core tools]: ./functions-run-local.md
-[extension bundle]: ./functions-bindings-register.md#extension-bundles
-[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
-## Known issues
-- Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` aren't supported and data upserts will fail. These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and aren't compatible with the `OPENJSON` function used by this Azure Functions binding.
+> [!NOTE]
+> In the current preview, Azure SQL bindings are only supported by [C# class library functions](functions-dotnet-class-library.md).
+
+<!-- awaiting bundle support
+## Install bundle
+The Kafka extension is part of an [extension bundle], which is specified in your host.json project file. When you create a project that targets version 2.x or later, you should already have this bundle installed. To learn more, see [extension bundle].
-## Open source
+-->
-The Azure SQL bindings for Azure Functions are open-source and available on the repository at [https://github.com/Azure/azure-functions-sql-extension](https://github.com/Azure/azure-functions-sql-extension).
+
+## Considerations
+
+- Because Azure SQL bindings don't include a trigger, you need to use another supported trigger to start a function that reads from or writes to an Azure SQL database.
+- Azure SQL binding supports version 2.x and later of the Functions runtime.
+- Source code for the Azure SQL bindings can be found in [this GitHub repository](https://github.com/Azure/azure-functions-sql-extension).
+- This binding requires connectivity to an Azure SQL or SQL Server database.
+- Output bindings against tables with columns of data types `NTEXT`, `TEXT`, or `IMAGE` aren't supported and data upserts will fail. These types [will be removed](/sql/t-sql/data-types/ntext-text-and-image-transact-sql) in a future version of SQL Server and aren't compatible with the `OPENJSON` function used by this Azure Functions binding.
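If existing tables still use those deprecated types, one option is to migrate the affected columns to their modern equivalents before wiring up the output binding. A sketch, with hypothetical table and column names:

```sql
-- NTEXT -> NVARCHAR(MAX), TEXT -> VARCHAR(MAX), IMAGE -> VARBINARY(MAX)
ALTER TABLE dbo.ToDo ALTER COLUMN [Description] NVARCHAR(MAX);
```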
## Next steps
The Azure SQL bindings for Azure Functions are open-source and available on the
- [Save data to a database (Output binding)](./functions-bindings-azure-sql-output.md)
- [Review ToDo API sample with Azure SQL bindings](/samples/azure-samples/azure-sql-binding-func-dotnet-todo/todo-backend-dotnet-azure-sql-bindings-azure-functions/)
- [Learn how to connect Azure Function to Azure SQL with managed identity](./functions-identity-access-azure-sql-with-managed-identity.md)
-- [Use SQL bindings in Azure Stream Analytics](../stream-analytics/sql-database-upsert.md#option-1-update-by-key-with-the-azure-function-sql-binding)
+- [Use SQL bindings in Azure Stream Analytics](../stream-analytics/sql-database-upsert.md#option-1-update-by-key-with-the-azure-function-sql-binding)
+
+[preview NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql
+[core tools]: ./functions-run-local.md
+[extension bundle]: ./functions-bindings-register.md#extension-bundles
+[Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python"
## Configuration
-The following table explains the binding configuration properties that you set in the *function.json* file and the `Table` attribute.
+The following table explains the binding configuration properties that you set in the *function.json* file.
|function.json property | Description|
||-|
The following table explains the binding configuration properties that you set i
|**filter** | Optional. An OData filter expression for the entities to return from the table. Can't be used with `rowKey`.|
|**connection** | The name of an app setting or setting collection that specifies how to connect to the table service. See [Connections](#connections). |
::: zone-end
[!INCLUDE [functions-table-connections](../../includes/functions-table-connections.md)]
azure-functions Functions Identity Access Azure Sql With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-access-azure-sql-with-managed-identity.md
In this step we'll connect to the SQL database with an Azure AD user account and
In the final step we'll configure the Azure Function SQL connection string to use Azure AD managed identity authentication.
-The connection string setting name is identified in our Functions code as the binding attribute "ConnectionStringSetting", as seen in the SQL input binding [attributes and annotations](./functions-bindings-azure-sql-input.md?tabs=csharp#attributes-and-annotations).
+The connection string setting name is identified in our Functions code as the binding attribute "ConnectionStringSetting", as seen in the SQL input binding [attributes and annotations](./functions-bindings-azure-sql-input.md?pivots=programming-language-csharp#attributes).
In the application settings of our Function App, the SQL connection string setting should be updated to follow this format:
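A minimal sketch of such a connection string, assuming the `Active Directory Managed Identity` authentication keyword supported by recent Microsoft.Data.SqlClient versions (server and database names are placeholders):

```
Server=<your-server>.database.windows.net; Authentication=Active Directory Managed Identity; Database=<your-database>
```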
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
A pre-upgrade validator is available to help identify potential issues when migr
1. In *Search for common problems or tools*, enter and select **Functions 4.x Pre-Upgrade Validator**
-To migrate an app from 3.x to 4.x, set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` with the following Azure CLI or Azure PowerShell commands:
+Once you have validated that the app can be upgraded, you can begin the process of migration. See the subsections below for instructions for [migration without slots](#migration-without-slots) and [migration with slots](#migration-with-slots).
+
+> [!NOTE]
+> If you are using a slot to manage the migration, you will need to set the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` application setting to "0" on _both_ slots. This allows the version changes you make to be included in the slot swap operation. You can then upgrade your staging (non-production) slot, and then you can perform the swap.
+
+To migrate an app from 3.x to 4.x, you will:
+
+- Set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4`
+- **For Windows function apps only**, enable .NET 6.0 through the `netFrameworkVersion` setting
+
+##### Migration without slots
+
+You can use the following Azure CLI or Azure PowerShell commands to perform this upgrade directly on a site without slots:
# [Azure CLI](#tab/azure-cli) ```azurecli
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -n <APP_NAME> -g <RESOURCE_GROUP_NAME>
+az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
-az functionapp config set --net-framework-version v6.0 -n <APP_NAME> -g <RESOURCE_GROUP_NAME>
+az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```
# [Azure PowerShell](#tab/azure-powershell)
Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESO
+##### Migration with slots
+
+You can use the following Azure CLI commands to perform this upgrade using deployment slots:
+
+First, update the production slot with `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0`. If your app can tolerate a restart (which impacts availability), it is recommended that you update the setting directly on the production slot, possibly at a time of lower traffic. If you instead choose to swap this setting into place, you should immediately update the staging slot after the swap. A consequence of swapping when only staging has `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` is that it will remove the `FUNCTIONS_EXTENSION_VERSION` setting in staging, putting the slot into a bad state. Updating the staging slot with a version right after the swap enables you to roll your changes back if necessary. However, in such a situation, you should still be prepared to directly update settings on production to remove `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` before the swap back.
+
+```azurecli
+# Update production with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS
+az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
+
+# OR
+
+# Alternatively get production prepared with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS via a swap
+az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+# The swap actions should be accompanied by a version specification for the slot. You may see errors from staging during the time between these actions.
+az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
+az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~3 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+```
+
+After the production slot has `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` configured, you can configure everything else in the staging slot and then swap:
+
+```azurecli
+# Get staging configured with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS
+az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+# Get staging configured with the new extension version
+az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
+az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+
+# Be sure to confirm that your staging environment is working as expected before swapping.
+
+# Swap to migrate production to the new version
+az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
+```
+ ### Breaking changes between 3.x and 4.x The following are some changes to be aware of before upgrading a 3.x app to 4.x. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
azure-monitor Apm Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/apm-tables.md
- Title: Azure Monitor Application Insights workspace-based resource schema
-description: Learn about the new table structure and schema for Azure Monitor Application Insights workspace-based resources.
- Previously updated : 05/09/2020--
-# Workspace-based resource changes
-
-Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separate from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). With workspace-based Application Insights resources data is stored in a Log Analytics workspace with other monitoring data and application data. This simplifies your configuration by allowing you to more easily analyze data across multiple solutions and to leverage the capabilities of workspaces.
-
-## Classic data structure
-The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
-
-> [!NOTE]
-> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](../app/apm-tables.md), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
-
-[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](../logs/media/data-platform-logs/logs-structure-ai.png)](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox)
-
-## Table structure
-
-| Legacy table name | New table name | Description |
-|:|:|:|
-| availabilityResults | AppAvailabilityResults | Summary data from availability tests.|
-| browserTimings | AppBrowserTimings | Data about client performance, such as the time taken to process the incoming data.|
-| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via TrackDependency(), for example, calls to REST API, database or a file system. |
-| customEvents | AppEvents | Custom events created by your application. |
-| customMetrics | AppMetrics | Custom metrics created by your application. |
-| pageViews | AppPageViews| Data about each website view with browser information. |
-| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources supporting the application, for example, Windows performance counters. |
-| requests | AppRequests | Requests received by your application. For example, a separate request record is logged for each HTTP request that your web app receives. |
-| exceptions | AppExceptions | Exceptions thrown by the application runtime, captures both server side and client-side (browsers) exceptions. |
-| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via TrackTrace(). |
-
-## Table schemas
-
-The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
-
-Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you will need to change each column name along with the table names in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it is a workspace-based resource. The new property names are required for when querying from within the context of the Log Analytics workspace experience.
-
-### AppAvailabilityResults
-
-Legacy table: availability
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|duration|real|DurationMs|real|
-|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|String|
-|location|string|Location|string|
-|message|string|Message|string|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|performanceBucket|string|PerformanceBucket|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|size|real|Size|real|
-|success|string|Success|Bool|
-|timestamp|datetime|TimeGenerated|datetime|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppBrowserTimings
-
-Legacy table: browserTimings
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|name|string|Name|datetime|
-|networkDuration|real|NetworkDurationMs|real|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|performanceBucket|string|PerformanceBucket|string|
-|processingDuration|real|ProcessingDurationMs|real|
-|receiveDuration|real|ReceiveDurationMs|real|
-|sdkVersion|string|SdkVersion|string|
-|sendDuration|real|SendDurationMs|real|
-|session_Id|string|SessionId|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|totalDuration|real|TotalDurationMs|real|
-|url|string|Url|string|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppDependencies
-
-Legacy table: dependencies
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|data|string|Data|string|
-|duration|real|DurationMs|real|
-|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|String|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|performanceBucket|string|PerformanceBucket|string|
-|resultCode|string|ResultCode|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|success|string|Success|Bool|
-|target|string|Target|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|type|string|DependencyType|string|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppEvents
-
-Legacy table: customEvents
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppMetrics
-
-Legacy table: customMetrics
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|iKey|string|IKey|string|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-|value|real|(removed)||
-|valueCount|int|ValueCount|int|
-|valueMax|real|ValueMax|real|
-|valueMin|real|ValueMin|real|
-|valueStdDev|real|ValueStdDev|real|
-|valueSum|real|ValueSum|real|
-
-### AppPageViews
-
-Legacy table: pageViews
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|duration|real|DurationMs|real|
-|`id`|string|`Id`|string|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|String|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|performanceBucket|string|PerformanceBucket|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|url|string|Url|string|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppPerformanceCounters
-
-Legacy table: performanceCounters
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|category|string|Category|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|counter|string|(removed)||
-|customDimensions|dynamic|Properties|Dynamic|
-|iKey|string|IKey|string|
-|instance|string|Instance|string|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|name|string|Name|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|timestamp|datetime|TimeGenerated|datetime|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-|value|real|Value|real|
-
-### AppRequests
-
-Legacy table: requests
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|Dynamic|
-|customMeasurements|dynamic|Measurements|Dynamic|
-|duration|real|DurationMs|Real|
-|`id`|string|`Id`|String|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|String|
-|name|string|Name|String|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|performanceBucket|string|PerformanceBucket|String|
-|resultCode|string|ResultCode|String|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|source|string|Source|String|
-|success|string|Success|Bool|
-|timestamp|datetime|TimeGenerated|datetime|
-|url|string|Url|String|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppExceptions
-
-Legacy table: exceptions
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|assembly|string|Assembly|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|dynamic|
-|customMeasurements|dynamic|Measurements|dynamic|
-|details|dynamic|Details|dynamic|
-|handledAt|string|HandledAt|string|
-|iKey|string|IKey|string|
-|innermostAssembly|string|InnermostAssembly|string|
-|innermostMessage|string|InnermostMessage|string|
-|innermostMethod|string|InnermostMethod|string|
-|innermostType|string|InnermostType|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|message|string|Message|string|
-|method|string|Method|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|outerAssembly|string|OuterAssembly|string|
-|outerMessage|string|OuterMessage|string|
-|outerMethod|string|OuterMethod|string|
-|outerType|string|OuterType|string|
-|problemId|string|ProblemId|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|severityLevel|int|SeverityLevel|int|
-|timestamp|datetime|TimeGenerated|datetime|
-|type|string|ExceptionType|string|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-### AppTraces
-
-Legacy table: traces
-
-|ApplicationInsights|Type|LogAnalytics|Type|
-|:|:|:|:|
-|appId|string|\_ResourceGUID|string|
-|application_Version|string|AppVersion|string|
-|appName|string|\_ResourceId|string|
-|client_Browser|string|ClientBrowser|string|
-|client_City|string|ClientCity|string|
-|client_CountryOrRegion|string|ClientCountryOrRegion|string|
-|client_IP|string|ClientIP|string|
-|client_Model|string|ClientModel|string|
-|client_OS|string|ClientOS|string|
-|client_StateOrProvince|string|ClientStateOrProvince|string|
-|client_Type|string|ClientType|string|
-|cloud_RoleInstance|string|AppRoleInstance|string|
-|cloud_RoleName|string|AppRoleName|string|
-|customDimensions|dynamic|Properties|dynamic|
-|customMeasurements|dynamic|Measurements|dynamic|
-|iKey|string|IKey|string|
-|itemCount|int|ItemCount|int|
-|itemId|string|\_ItemId|string|
-|itemType|string|Type|string|
-|message|string|Message|string|
-|operation_Id|string|OperationId|string|
-|operation_Name|string|OperationName|string|
-|operation_ParentId|string|OperationParentId|string|
-|operation_SyntheticSource|string|OperationSyntheticSource|string|
-|sdkVersion|string|SdkVersion|string|
-|session_Id|string|SessionId|string|
-|severityLevel|int|SeverityLevel|int|
-|timestamp|datetime|TimeGenerated|datetime|
-|user_AccountId|string|UserAccountId|string|
-|user_AuthenticatedId|string|UserAuthenticatedId|string|
-|user_Id|string|UserId|string|
-
-## Next steps
-
-* [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
-
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Once the migration is complete, you can use [diagnostic settings](../essentials/
> - If the retention setting for your Application Insights instance under **Configure** > **Usage and estimated costs** > **Data Retention** is enabled, then use that setting to control the retention days for the telemetry data still saved in your classic resource's storage. - Understand [Workspace-based Application Insights](../logs/cost-logs.md#application-insights-billing) usage and costs.
+- Understand [Workspace-based resource changes](#workspace-based-resource-changes).
## Migrate your resource
Clicking the blue link text will take you to the associated Log Analytics worksp
We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience.
-To write queries against the [new workspace-based table structure/schema](apm-tables.md), you must first navigate to your Log Analytics workspace.
+To write queries against the [new workspace-based table structure/schema](#workspace-based-resource-changes), you must first navigate to your Log Analytics workspace.
-To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](apm-tables.md#appmetrics).
+To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
When you query directly from the Log Analytics UI within your workspace, you'll only see the data that is ingested post migration. To see both your classic Application Insights data + new data ingested after migration in a unified query experience use the Logs (Analytics) query view from within your migrated Application Insights resource.
You don't have to make any changes prior to migrating. This message alerts you t
You can check your current retention settings for Log Analytics under **General** > **Usage and estimated costs** > **Data Retention** from within the Log Analytics UI. This setting will affect how long any new ingested data is stored once you migrate your Application Insights resource.
+## Workspace-based resource changes
+
+Prior to the introduction of [workspace-based Application Insights resources](create-workspace-resource.md), Application Insights data was stored separately from other log data in Azure Monitor. Both are based on Azure Data Explorer and use the same Kusto Query Language (KQL). With workspace-based Application Insights resources, data is stored in a Log Analytics workspace along with other monitoring and application data. This simplifies your configuration, lets you analyze data across multiple solutions more easily, and lets you take advantage of the capabilities of workspaces.
+
+### Classic data structure
+The structure of a Log Analytics workspace is described in [Log Analytics workspace overview](../logs/log-analytics-workspace-overview.md). For a classic application, the data is not stored in a Log Analytics workspace, but it uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
+
+> [!NOTE]
+> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
+
+[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](../logs/media/data-platform-logs/logs-structure-ai.png)](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox)
+
+### Table structure
+
+| Legacy table name | New table name | Description |
+|:|:|:|
+| availabilityResults | AppAvailabilityResults | Summary data from availability tests.|
+| browserTimings | AppBrowserTimings | Data about client performance, such as the time taken to process the incoming data.|
+| dependencies | AppDependencies | Calls from the application to other components (including external components) recorded via TrackDependency(), for example, calls to REST API, database or a file system. |
+| customEvents | AppEvents | Custom events created by your application. |
+| customMetrics | AppMetrics | Custom metrics created by your application. |
+| pageViews | AppPageViews| Data about each website view with browser information. |
+| performanceCounters | AppPerformanceCounters | Performance measurements from the compute resources supporting the application, for example, Windows performance counters. |
+| requests | AppRequests | Requests received by your application. For example, a separate request record is logged for each HTTP request that your web app receives. |
+| exceptions | AppExceptions | Exceptions thrown by the application runtime; captures both server-side and client-side (browser) exceptions. |
+| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via TrackTrace(). |
+
+### Table schemas
+
+The following sections show the mapping between the classic property names and the new workspace-based Application Insights property names. Use this information to convert any queries using legacy tables.
+
+Most of the columns have the same name with different capitalization. Since KQL is case-sensitive, you will need to change each column name, along with the table names, in existing queries. Columns with changes in addition to capitalization are highlighted. You can still use your classic Application Insights queries within the **Logs** pane of your Application Insights resource, even if it is a workspace-based resource. The new property names are required when querying from within the context of the Log Analytics workspace experience.
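+
+For example, a classic query such as `requests | where timestamp > ago(1h) | summarize count() by resultCode` becomes `AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode` against the workspace tables. As a minimal C# sketch of running the converted query with the `Azure.Monitor.Query` client library (the workspace ID is a placeholder, and the query is only an illustration):
+
+```csharp
+using System;
+using Azure;
+using Azure.Identity;
+using Azure.Monitor.Query;
+using Azure.Monitor.Query.Models;
+
+// Classic:          requests | where timestamp > ago(1h) | summarize count() by resultCode
+// Workspace-based:  AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode
+var client = new LogsQueryClient(new DefaultAzureCredential());
+
+Response<LogsQueryResult> result = await client.QueryWorkspaceAsync(
+    "<LOG_ANALYTICS_WORKSPACE_ID>",
+    "AppRequests | where TimeGenerated > ago(1h) | summarize count() by ResultCode",
+    new QueryTimeRange(TimeSpan.FromHours(1)));
+
+// summarize count() produces a column named count_; each row is one ResultCode
+// with its request count over the last hour.
+foreach (LogsTableRow row in result.Value.Table.Rows)
+{
+    Console.WriteLine($"{row["ResultCode"]}: {row["count_"]}");
+}
+```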
+
+#### AppAvailabilityResults
+
+Legacy table: availabilityResults
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|duration|real|DurationMs|real|
+|`id`|string|`Id`|string|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|String|
+|location|string|Location|string|
+|message|string|Message|string|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|performanceBucket|string|PerformanceBucket|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|size|real|Size|real|
+|success|string|Success|Bool|
+|timestamp|datetime|TimeGenerated|datetime|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppBrowserTimings
+
+Legacy table: browserTimings
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|name|string|Name|string|
+|networkDuration|real|NetworkDurationMs|real|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|performanceBucket|string|PerformanceBucket|string|
+|processingDuration|real|ProcessingDurationMs|real|
+|receiveDuration|real|ReceiveDurationMs|real|
+|sdkVersion|string|SdkVersion|string|
+|sendDuration|real|SendDurationMs|real|
+|session_Id|string|SessionId|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|totalDuration|real|TotalDurationMs|real|
+|url|string|Url|string|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppDependencies
+
+Legacy table: dependencies
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|data|string|Data|string|
+|duration|real|DurationMs|real|
+|`id`|string|`Id`|string|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|String|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|performanceBucket|string|PerformanceBucket|string|
+|resultCode|string|ResultCode|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|success|string|Success|Bool|
+|target|string|Target|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|type|string|DependencyType|string|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppEvents
+
+Legacy table: customEvents
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppMetrics
+
+Legacy table: customMetrics
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|iKey|string|IKey|string|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+|value|real|(removed)||
+|valueCount|int|ValueCount|int|
+|valueMax|real|ValueMax|real|
+|valueMin|real|ValueMin|real|
+|valueStdDev|real|ValueStdDev|real|
+|valueSum|real|ValueSum|real|
+
+#### AppPageViews
+
+Legacy table: pageViews
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|duration|real|DurationMs|real|
+|`id`|string|`Id`|string|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|String|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|performanceBucket|string|PerformanceBucket|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|url|string|Url|string|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppPerformanceCounters
+
+Legacy table: performanceCounters
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|category|string|Category|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|counter|string|(removed)||
+|customDimensions|dynamic|Properties|Dynamic|
+|iKey|string|IKey|string|
+|instance|string|Instance|string|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|name|string|Name|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|timestamp|datetime|TimeGenerated|datetime|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+|value|real|Value|real|
+
+#### AppRequests
+
+Legacy table: requests
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|Dynamic|
+|customMeasurements|dynamic|Measurements|Dynamic|
+|duration|real|DurationMs|Real|
+|`id`|string|`Id`|String|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|String|
+|name|string|Name|String|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|performanceBucket|string|PerformanceBucket|String|
+|resultCode|string|ResultCode|String|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|source|string|Source|String|
+|success|string|Success|Bool|
+|timestamp|datetime|TimeGenerated|datetime|
+|url|string|Url|String|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppExceptions
+
+Legacy table: exceptions
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|assembly|string|Assembly|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|dynamic|
+|customMeasurements|dynamic|Measurements|dynamic|
+|details|dynamic|Details|dynamic|
+|handledAt|string|HandledAt|string|
+|iKey|string|IKey|string|
+|innermostAssembly|string|InnermostAssembly|string|
+|innermostMessage|string|InnermostMessage|string|
+|innermostMethod|string|InnermostMethod|string|
+|innermostType|string|InnermostType|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|message|string|Message|string|
+|method|string|Method|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|outerAssembly|string|OuterAssembly|string|
+|outerMessage|string|OuterMessage|string|
+|outerMethod|string|OuterMethod|string|
+|outerType|string|OuterType|string|
+|problemId|string|ProblemId|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|severityLevel|int|SeverityLevel|int|
+|timestamp|datetime|TimeGenerated|datetime|
+|type|string|ExceptionType|string|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+
+#### AppTraces
+
+Legacy table: traces
+
+|ApplicationInsights|Type|LogAnalytics|Type|
+|:|:|:|:|
+|appId|string|\_ResourceGUID|string|
+|application_Version|string|AppVersion|string|
+|appName|string|\_ResourceId|string|
+|client_Browser|string|ClientBrowser|string|
+|client_City|string|ClientCity|string|
+|client_CountryOrRegion|string|ClientCountryOrRegion|string|
+|client_IP|string|ClientIP|string|
+|client_Model|string|ClientModel|string|
+|client_OS|string|ClientOS|string|
+|client_StateOrProvince|string|ClientStateOrProvince|string|
+|client_Type|string|ClientType|string|
+|cloud_RoleInstance|string|AppRoleInstance|string|
+|cloud_RoleName|string|AppRoleName|string|
+|customDimensions|dynamic|Properties|dynamic|
+|customMeasurements|dynamic|Measurements|dynamic|
+|iKey|string|IKey|string|
+|itemCount|int|ItemCount|int|
+|itemId|string|\_ItemId|string|
+|itemType|string|Type|string|
+|message|string|Message|string|
+|operation_Id|string|OperationId|string|
+|operation_Name|string|OperationName|string|
+|operation_ParentId|string|OperationParentId|string|
+|operation_SyntheticSource|string|OperationSyntheticSource|string|
+|sdkVersion|string|SdkVersion|string|
+|session_Id|string|SessionId|string|
+|severityLevel|int|SeverityLevel|int|
+|timestamp|datetime|TimeGenerated|datetime|
+|user_AccountId|string|UserAccountId|string|
+|user_AuthenticatedId|string|UserAuthenticatedId|string|
+|user_Id|string|UserId|string|
+ ## Next steps * [Explore metrics](../essentials/metrics-charts.md)
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Once your resource is created, you will see the corresponding workspace info in
Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. > [!NOTE]
-> We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience. To query/view against the [new workspace-based table structure/schema](apm-tables.md) you must first navigate to your Log Analytics workspace. Selecting **Logs (Analytics)** from within the Application Insights panes will give you access to the classic Application Insights query experience.
+> We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience. To query/view against the [new workspace-based table structure/schema](convert-classic-resource.md#workspace-based-resource-changes) you must first navigate to your Log Analytics workspace. Selecting **Logs (Analytics)** from within the Application Insights panes will give you access to the classic Application Insights query experience.
## Copy the connection string
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
To minimize intermittent network connectivity failure, we have implemented Cache
## Application Insights CDN outage
-You can confirm if there is an Application Insights CDN outage by attempting to access the CDN endpoint directly from the browser (for example, https://az416426.vo.msecnd.net/scripts/b/ai.2.min.js or https://js.monitor.azure.com/scripts/b/ai.2.min.js) from a different location than your end users' probably from your own development machine (assuming that your organization has not blocked this domain).
+You can confirm whether there is an Application Insights CDN outage by attempting to access the CDN endpoint directly from the browser (for example, https://js.monitor.azure.com/scripts/b/ai.2.min.js) from a different location than your end users', such as from your own development machine (assuming that your organization has not blocked this domain).
-If you confirm there is an outage, you can [create a new support ticket](https://azure.microsoft.com/support/create-ticket/) or try changing the URL used to download the SDK.
-
-### Change the CDN endpoint
-
-As the snippet and its configuration are returned by your application as part of each generated page, you can change the snippet `src` configuration to use a different URL for the SDK. By using this approach, you could bypass the CDN blocked issue as the new URL should not be blocked.
-
-Current Application Insights JavaScript SDK CDN endpoints
-- `https://az416426.vo.msecnd.net/scripts/b/ai.2.min.js`-- `https://js.monitor.azure.com/scripts/b/ai.2.min.js`-
-> [!NOTE]
-> The `https://js.monitor.azure.com/` endpoint is an alias that allows us to switch between CDN providers within approximately 5 minutes, without the need for you to change any config. This is to enable us to fix detected CDN related issues more rapidly if a CDN provider is having regional or global issues without requiring everyone to adjust their settings.
+If you confirm there is an outage, you can [create a new support ticket](https://azure.microsoft.com/support/create-ticket/).
## SDK failed to initialize after loading the script
Depending on the frequency that the application, firewall, or environment update
If the CDN endpoint is identified as unsafe, [create a support ticket](https://azure.microsoft.com/support/create-ticket/) to ensure that the issue is resolved as soon as possible.
-To *potentially* bypass this issue more rapidly, you can [change the SDK CDN endpoint](#change-the-cdn-endpoint).
- ### Application Insights JavaScript CDN is blocked (by end user - blocked by browser; installed blocker; personal firewall) Check if your end users have:
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Application Insights can be used with any web pages - you just add a short piece of JavaScript (Node.js has a [standalone SDK](nodejs.md)). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Application Insights can be used with any web pages - you just add a short piece
* [npm Setup](#npm-based-setup) * [JavaScript Snippet](#snippet-based-setup)
+> [!WARNING]
+> The `@microsoft/applicationinsights-web-basic` package (AISKULight) does not support the use of connection strings.
+ > [!IMPORTANT] > Only use one method to add the JavaScript SDK to your application. If you use the NPM Setup, don't use the Snippet and vice versa.
azure-monitor Sql Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-troubleshoot.md
During preview of SQL Insights, you may encounter the following known issues.
* **'Login failed' error connecting to server or database**. Using certain special characters in SQL authentication passwords saved in the monitoring VM configuration or in Key Vault may prevent the monitoring VM from connecting to a SQL server or database. This set of characters includes parentheses, square and curly brackets, the dollar sign, forward and back slashes, and dot (`[ { ( ) } ] $ \ / .`). * Spaces in the database connection string attributes may be replaced with special characters, leading to database connection failures. For example, if the space in the `User Id` attribute is replaced with a special character, connections will fail with the **Login failed for user ''** error. To resolve, edit the monitoring profile configuration, and delete every special character appearing in place of a space. Some special characters may look indistinguishable from a space, thus you may want to delete every space character, type it again, and save the configuration.
+* Data collection and visualization may not work if the OS computer name of the monitoring VM is different from the monitoring VM name.
## Best practices
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Title: "What's new in Azure Monitor documentation" description: "What's new in Azure Monitor documentation" Previously updated : 04/04/2022 Last updated : 05/03/2022 # What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
+## April, 2022
+
+### General
+
+**New articles**
+
+- [Monitoring Azure Monitor data reference](azure-monitor-monitoring-reference.md)
+
+**Updated articles**
+
+- [Azure Monitor best practices - Analyze and visualize data](best-practices-analysis.md)
+
+### Agents
+
+**New articles**
+
+- [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md)
+- [Azure Monitor agent on Windows client devices (Preview)](agents/azure-monitor-agent-windows-client.md)
+- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md)
++
+**Updated articles**
+
+- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md)
+- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md)
+- [Overview of Azure Monitor agents](agents/agents-overview.md)
+
+### Alerts
+
+**Updated articles**
+
+- [Alerts on activity log](alerts/activity-log-alerts.md)
+- [Configure Azure to connect ITSM tools using Secure Webhook](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md)
+- [Connect Azure to ITSM tools by using IT Service Management Solution](alerts/itsmc-definition.md)
+- [Connect Azure to ITSM tools by using Secure Webhook](alerts/it-service-management-connector-secure-webhook-connections.md)
+- [Create a metric alert with a Resource Manager template](alerts/alerts-metric-create-templates.md)
+- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)
+- [IT Service Management (ITSM) Integration](alerts/itsmc-overview.md)
+- [Log alerts in Azure Monitor](alerts/alerts-unified-log.md)
+- [Manage alert instances with unified alerts](alerts/alerts-managing-alert-instances.md)
+- [Troubleshoot problems in IT Service Management Connector](alerts/itsmc-troubleshoot-overview.md)
+
+### Application Insights
+
+**New articles**
+
+- [PageView telemetry: Application Insights data model](app/data-model-pageview-telemetry.md)
+- [Profile live Azure containers with Application Insights](app/profiler-containers.md)
+
+**Updated articles**
+
+- [Angular plugin for Application Insights JavaScript SDK](app/javascript-angular-plugin.md)
+- [Application Insights for web pages](app/javascript.md)
+- [Configure Application Insights Profiler](app/profiler-settings.md)
+- [Connection strings](app/sdk-connection-string.md)
+- [Live Metrics Stream: Monitor & Diagnose with 1-second latency](app/live-stream.md)
+- [Monitor your Node.js services and apps with Application Insights](app/nodejs.md)
+- [Profile production applications in Azure with Application Insights](app/profiler-overview.md)
+- [React Native plugin for Application Insights JavaScript SDK](app/javascript-react-native-plugin.md)
+- [React plugin for Application Insights JavaScript SDK](app/javascript-react-plugin.md)
+- [Tips for updating your JVM args - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)
+- [Troubleshooting no data - Application Insights for .NET/.NET Core](app/asp-net-troubleshoot-no-data.md)
+
+### Autoscale
+
+**Updated articles**
+
+- [Get started with Autoscale in Azure](autoscale/autoscale-get-started.md)
+
+### Essentials
+
+**Updated articles**
+
+- [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md)
+- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)
+
+### Insights
+
+**Updated articles**
+
+- [Monitor Surface Hubs with Azure Monitor to track their health](insights/surface-hubs.md)
+
+### Logs
+
+**New articles**
+
+- [Collect and ingest data from a file using Data Collection Rules (DCR) (Preview)](logs/data-ingestion-from-file.md)
+
+**Updated articles**
+
+- [Azure Monitor Logs pricing details](logs/cost-logs.md)
+- [Log Analytics workspace data export in Azure Monitor](logs/logs-data-export.md)
+- [Tutorial: Send custom logs to Azure Monitor Logs using Resource Manager templates (preview)](logs/tutorial-custom-logs-api.md)
+
+### Visualizations
+
+**Updated articles**
+
+- [Monitor your Azure services in Grafana](visualize/grafana-plugin.md)
+ ## March, 2022 ### Agents
This article lists significant changes to Azure Monitor documentation.
- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)
-## Visualizations
+### Visualizations
**Updated articles**
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply when you use Azure Resource Manager and Azure resourc
[!INCLUDE [azure-virtual-machines-limits-azure-resource-manager](../../../includes/azure-virtual-machines-limits-azure-resource-manager.md)]
-### Shared Image Gallery limits
+### Compute Gallery limits
-There are limits, per subscription, for deploying resources using Shared Image Galleries:
+There are limits, per subscription, for deploying resources using Compute Galleries:
-- 100 shared image galleries, per subscription, per region
+- 100 compute galleries, per subscription, per region
- 1,000 image definitions, per subscription, per region - 10,000 image versions, per subscription, per region
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 05/03/2022 Last updated : 05/04/2022 # Move operation support for resources
Jump to a resource provider namespace:
> | sharedvmextensions | No | No | No | > | sharedvmimages | No | No | No | > | sharedvmimages / versions | No | No | No |
-> | snapshots | Yes | Yes | No |
+> | snapshots | Yes - Full <br> No - Incremental | Yes - Full <br> No - Incremental | No - Full <br> No - Incremental |
> | sshpublickeys | No | No | No | > | virtualmachines | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. | > | virtualmachines / extensions | Yes | Yes | No |
chaos-studio Chaos Studio Fault Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md
The following are the supported resource types for faults, the target types, and
| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader | | Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor | | Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor |
-| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster User Role |
+| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role |
| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator | | Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Key Vault Contributor | | Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-* Release notes for version `1.6.1`:
- * Added new languages: Ukrainian
+* Release notes for version `1.7.0`:
+  * Updated dependencies
| Image Tags | Notes | ||:| | `latest` | |
-| `1.6.1-amd64-preview` | |
+| `1.7.0-amd64-preview` | |
# [Previous versions](#tab/previous) | Image Tags | Notes | ||:|
+| `1.6.1-amd64-preview` | |
| `1.5.0-amd64-preview` | | | `1.3.0-amd64-preview` | | | `1.2.0-amd64-preview` | |
communication-services Handle Calling Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/handle-calling-events.md
# Quickstart: Handle voice and video calling events Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services voice and video calling events.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Azure Container Apps deployments are powered by an Azure Resource Manager (ARM) template. Some Container Apps CLI commands also support using a YAML template to specify a resource. > [!NOTE]
-> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+> Azure Container Apps resources have migrated from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
## Container Apps environment
container-apps Connect Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/connect-apps.md
Previously updated : 11/02/2021 Last updated : 04/04/2022 - # Connect applications in Azure Container Apps Preview
Once you know a container app's domain name, then you can call the location with
A sample solution showing how you can call between containers using either the FQDN location or Dapr can be found on [Azure Samples](https://github.com/Azure-Samples/container-apps-connect-multiple-apps).
+For more details about connecting Dapr applications, refer to [Invoke services using HTTP](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services/).
+ ## Location A container app's location is composed of values associated with its environment, name, and region. Available through the `azurecontainerapps.io` top-level domain, the fully qualified domain name (FQDN) uses:
Developing microservices often requires you to implement patterns common to dist
A microservice that uses Dapr is available through the following URL pattern:
+```text
+http://localhost:3500/v1.0/invoke/<YOUR_APP_NAME>/method
+```
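+
+For example, assuming a sibling container app named `my-orders-service` (a hypothetical name) that exposes a `submit` method, a call made from another app in the same environment resolves to:
+
+```text
+http://localhost:3500/v1.0/invoke/my-orders-service/method/submit
+```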
+ :::image type="content" source="media/connect-apps/azure-container-apps-location-dapr.png" alt-text="Azure Container Apps container app location with Dapr."::: ## Next steps
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
az extension add --name containerapp --upgrade
Now that the extension is installed, register the `Microsoft.App` namespace. > [!NOTE]
-> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+> Azure Container Apps resources have migrated from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
# [Bash](#tab/bash)
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
# Set scaling rules in Azure Container Apps
-Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of the container app are created on-demand. These instances are known as replicas.
+Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of the container app are created on-demand. These instances are known as replicas. When you first create a container app, the minimum replica count in the scale rule is set to zero. No charges are incurred when an application scales to zero.
-Scaling rules are defined in `resources.properties.template.scale` section of the [configuration](overview.md). There are two scale properties that apply to all rules in your container app.
+Scaling rules are defined in the `resources.properties.template.scale` section of the JSON configuration file. When you add or edit existing scaling rules, a new revision of your container app is automatically created with the new configuration. A revision is an immutable snapshot of your container app; one is created automatically whenever certain aspects of your application are updated (scaling rules, Dapr settings, template configuration, and so on). See the [Change types](./revisions.md#change-types) section to learn about the types of changes that do or don't trigger a new revision.
+
+There are two scale properties that apply to all rules in your container app:
| Scale property | Description | Default value | Min value | Max value | ||||||
Scaling rules are defined in `resources.properties.template.scale` section of th
- Individual scale rules are defined in the `rules` array. - If you want to ensure that an instance of your application is always running, set `minReplicas` to 1 or higher. - Replicas not processing, but that remain in memory are billed in the "idle charge" category.-- Changes to scaling rules are a [revision-scope](overview.md) change.
+- Changes to scaling rules are a [revision-scope](./revisions.md#revision-scope-changes) change.
- When using non-HTTP event scale rules, setting the `activeRevisionMode` to `single` is recommended. > [!IMPORTANT]
Scaling rules are defined in `resources.properties.template.scale` section of th
## Scale triggers
-Container Apps supports a large number of scale triggers. For more information about supported scale triggers, see [KEDA Scalers](https://keda.sh/docs/scalers/).
+Azure Container Apps supports the following scale triggers:
-The KEDA documentation shows code examples in YAML, while the Container Apps ARM template is in JSON. As you transform examples from KEDA for your needs, make sure to switch property names from [kebab](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Delimiter-separated_words) case to [camel](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Letter_case-separated_words) casing.
+- [HTTP traffic](#http): Scaling based on the number of concurrent HTTP requests to your revision.
+- [Event-driven](#event-driven): Event-based triggers such as messages in an Azure Service Bus.
+- [CPU](#cpu) or [Memory](#memory) usage: Scaling based on the amount of CPU or memory consumed by a replica.
## HTTP
With an HTTP scaling rule, you have control over the threshold that determines w
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `concurrentRequests`| Once the number of requests exceeds this value, then more replicas are added, up to the `maxReplicas` amount. | 50 | 1 | n/a |
+| `concurrentRequests`| Once the number of concurrent requests exceeds this value, another replica is added. Replicas continue to be added, up to the `maxReplicas` amount, as the number of concurrent requests increases. | 100 | 1 | n/a |
+
+In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests.
```json {
With an HTTP scaling rule, you have control over the threshold that determines w
} ```
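
A minimal sketch of a complete `scale` section for this scenario might look like the following. The rule name `http-rule` is an illustrative assumption, and metadata values are expressed as strings, matching the templates shown later in this article:

```json
{
  ...
  "resources": {
    ...
    "properties": {
      ...
      "template": {
        ...
        "scale": {
          "minReplicas": "0",
          "maxReplicas": "5",
          "rules": [
          {
            "name": "http-rule",
            "http": {
              "metadata": {
                "concurrentRequests": "100"
              }
            }
          }]
        }
      }
    }
  }
}
```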
-In this example, the container app scales out up to five replicas and can scale down to zero instances. The scaling threshold is set to 100 concurrent requests per second.
+### Add an HTTP scale trigger to a Container App in single-revision mode
+
+> [!NOTE]
+> Revisions are immutable. Changing scale rules automatically generates a new revision.
+
+1. Open Azure portal, and navigate to your container app.
+
+1. Select **Scale**, then select your revision from the dropdown menu.
+
+ :::image type="content" source="media/scalers/scale-revisions.png" alt-text="A screenshot showing revisions scale.":::
+
+1. Select **Edit and deploy**.
+
+1. Select **Scale**, and then select **Add**.
+
+ :::image type="content" source="media/scalers/add-scale-rule.png" alt-text="A screenshot showing how to add a scale rule.":::
+
+1. Select **HTTP scaling**, enter a **Rule name** and the number of **Concurrent requests** for your scale rule, and then select **Add**.
+
+ :::image type="content" source="media/scalers/http-scale-rule.png" alt-text="A screenshot showing how to add an h t t p scale rule.":::
+
+1. Select **Create** when you are done.
+
+ :::image type="content" source="media/scalers/create-http-scale-rule.png" alt-text="A screenshot showing the newly created http scale rule.":::
## Event-driven
Each event type features different properties in the `metadata` section of the K
The following example shows how to create a scale rule based on an [Azure Service Bus](https://keda.sh/docs/scalers/azure-service-bus/) trigger.
+The container app scales according to the following behavior:
+
+- For every 20 messages placed in the queue, a new replica is created.
+- The connection string to the queue is provided as a parameter to the configuration file and referenced via the `secretRef` property.
+ ```json { ...
The following example shows how to create a scale rule based on an [Azure Servic
} ```
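
As a rough sketch, the corresponding rule definition could look like the following. The rule, queue, and secret names are illustrative assumptions:

```json
"rules": [
{
  "name": "queue-based-autoscaling",
  "custom": {
    "type": "azure-servicebus",
    "metadata": {
      "queueName": "myServiceBusQueue",
      "messageCount": "20"
    },
    "auth": [{
      "secretRef": "servicebusconnectionstring",
      "triggerParameter": "connection"
    }]
  }
}]
```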
-In this example, the container app scales according to the following behavior:
+> [!NOTE]
+> Upstream KEDA scale rules are defined using Kubernetes YAML, while Azure Container Apps supports ARM templates, Bicep templates, and Container Apps-specific YAML. The following example uses an ARM template, so when translating from existing KEDA manifests, switch the rule property names from [kebab](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Delimiter-separated_words) case to [camel](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Letter_case-separated_words) case.
-- As the messages count in the queue exceeds 20, new replicas are created.-- The connection string to the queue is provided as a parameter to the configuration file and referenced via the `secretRef` property.
+### Set up a connection string secret
+
+To create a custom scale trigger, first create a connection string secret to authenticate with the different custom scalers.
+
+1. In Azure portal, navigate to your container app and then select **Secrets**.
+
+1. Select **Add**, and then enter your secret key/value information.
+
+1. Select **Add** when you are done.
+
+ :::image type="content" source="media/scalers/connection-string.png" alt-text="A screenshot showing how to create a connection string.":::
+
+### Add a custom scale trigger
+
+1. In Azure portal, select **Scale** and then select your revision from the dropdown menu.
+
+ :::image type="content" source="media/scalers/scale-revisions.png" alt-text="A screenshot showing the revisions scale page.":::
+
+1. Select **Edit and deploy**.
+
+1. Select **Scale**, and then select **Add**.
+
+ :::image type="content" source="media/scalers/add-scale-rule.png" alt-text="A screenshot showing how to add a scale rule.":::
+
+1. Enter a **Rule name**, select **Custom**, and enter a **Custom rule type**. Enter your **Secret reference** and **Trigger parameter**, add your **Metadata** parameters, and then select **Add** when you are done.
+
+ :::image type="content" source="media/scalers/custom-scaler.png" alt-text="A screenshot showing how to configure a custom scale rule.":::
+
+1. Select **Create** when you are done.
+
+> [!NOTE]
+> In multiple revision mode, adding a new scale trigger creates a new revision of your application, but your old revision remains available with the old scale rules. Use the **Revision management** page to manage traffic allocations between revisions.
+
+### KEDA scalers conversion
+
+Azure Container Apps supports KEDA ScaledObjects and all of the available [KEDA scalers](https://keda.sh/docs/scalers/). To convert KEDA templates, it's easier to start with a custom JSON template and add the parameters you need based on the scenario and the scale trigger you want to set up.
+
+```json
+{
+ ...
+ "resources": {
+ ...
+ "properties": {
+ "configuration": {
+ "secrets": [{
+ "name": "<YOUR_CONNECTION_STRING_NAME>",
+ "value": "<YOUR-CONNECTION-STRING>"
+ }],
+ },
+ "template": {
+ ...
+ "scale": {
+ "minReplicas": "0",
+ "maxReplicas": "10",
+ "rules": [
+ {
+ "name": "<YOUR_TRIGGER_NAME>",
+ "custom": {
+ "type": "<TRIGGER_TYPE>",
+ "metadata": {
+ },
+ "auth": [{
+ "secretRef": "<YOUR_CONNECTION_STRING_NAME>",
+ "triggerParameter": "<TRIGGER_PARAMETER>"
+ }]
+ }
+ }]
+        }
+      }
+    }
+  }
+}
+```
+
+The following example sets up an [Azure Storage Queue](https://keda.sh/docs/scalers/azure-storage-queue/) scaler, which you can configure to autoscale based on Azure Storage queues.
+
+Below is the KEDA trigger specification for an Azure Storage Queue. To set up a scale rule in Azure Container Apps, you need the trigger `type` and any other required parameters. You can also add other optional parameters, which vary based on the scaler you're using.
+
+In this example, you need the `accountName` and the name of the cloud environment that the queue belongs to (`cloud`) to set up your scaler in Azure Container Apps.
+
+```yml
+triggers:
+- type: azure-queue
+ metadata:
+ queueName: orders
+ queueLength: '5'
+ connectionFromEnv: STORAGE_CONNECTIONSTRING_ENV_NAME
+ accountName: storage-account-name
+ cloud: AzureUSGovernmentCloud
+```
+
+Now your JSON config file should look like this:
+
+```json
+{
+ ...
+ "resources": {
+ ...
+ "properties": {
+ "configuration": {
+ "secrets": [{
+ "name": "my-connection-string",
+ "value": "*********"
+ }],
+ },
+ "template": {
+ ...
+ "scale": {
+ "minReplicas": "0",
+ "maxReplicas": "10",
+ "rules": [
+ {
+ "name": "queue-trigger",
+ "custom": {
+ "type": "azure-queue",
+ "metadata": {
+ "accountName": "my-storage-account-name",
+ "cloud": "AzurePublicCloud"
+ },
+ "auth": [{
+ "secretRef": "my-connection-string",
+ "triggerParameter": "connection"
+ }]
+ }
+ }]
+        }
+      }
+    }
+  }
+}
+```
+
+> [!NOTE]
+> KEDA ScaledJobs are not supported. See [KEDA scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview) for more details.
## CPU
The following example shows how to create a memory scaling rule.
## Considerations - Vertical scaling is not supported.+ - Replica quantities are a target amount, not a guarantee. - Even if you set `maxReplicas` to `1`, there is no assurance of thread safety.
+- If you're using [Dapr actors](https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/) to manage states, keep in mind that scaling to zero isn't supported. Dapr uses virtual actors to manage asynchronous calls, which means their in-memory representation isn't tied to their identity or lifetime.
+ ## Next steps > [!div class="nextstepaction"]
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
# How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-The SQL API in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. You can use the SQL API [.NET](sql-api-sdk-dotnet.md), [.NET Core](sql-api-sdk-dotnet-core.md), [Java](sql-api-sdk-java.md), [JavaScript](sql-api-sdk-node.md), [Node.js](sql-api-sdk-node.md), or [Python](sql-api-sdk-python.md) SDKs to register and invoke the stored procedures. Once you have defined one or more stored procedures, triggers, and user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
+The SQL API in Azure Cosmos DB supports registering and invoking stored procedures, triggers, and user-defined functions (UDFs) written in JavaScript. Once you've defined one or more stored procedures, triggers, and user-defined functions, you can load and view them in the [Azure portal](https://portal.azure.com/) by using Data Explorer.
-## <a id="stored-procedures"></a>How to run stored procedures
+You can use the SQL API SDKs across multiple platforms, including the [.NET v2 (legacy)](sql-api-sdk-dotnet.md), [.NET v3](sql-api-sdk-dotnet-standard.md), [Java](sql-api-sdk-java.md), [JavaScript](sql-api-sdk-node.md), and [Python](sql-api-sdk-python.md) SDKs. If you haven't worked with one of these SDKs before, see the *"Quickstart"* article for the appropriate SDK:
+
+| SDK | Getting started |
+| : | : |
+| .NET v3 | [Quickstart: Build a .NET console app to manage Azure Cosmos DB SQL API resources](create-sql-api-dotnet.md) |
+| Java | [Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data](create-sql-api-java.md) |
+| JavaScript | [Quickstart: Use Node.js to connect and query data from Azure Cosmos DB SQL API account](create-sql-api-nodejs.md) |
+| Python | [Quickstart: Build a Python application using an Azure Cosmos DB SQL API account](create-sql-api-python.md) |
+
+## How to run stored procedures
Stored procedures are written using JavaScript. They can create, update, read, query, and delete items within an Azure Cosmos container. For more information on how to write stored procedures in Azure Cosmos DB, see the [How to write stored procedures in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures) article.
The following examples show how to register and call a stored procedure by using
> [!NOTE] > For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applies to triggers.
-### Stored procedures - .NET SDK V2
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
-The following example shows how to register a stored procedure by using the .NET SDK V2:
+The following example shows how to register a stored procedure by using the .NET SDK v2:
```csharp string storedProcedureId = "spCreateToDoItems";
var response = await client.CreateStoredProcedureAsync(containerUri, newStoredPr
StoredProcedure createdStoredProcedure = response.Resource; ```
-The following code shows how to call a stored procedure by using the .NET SDK V2:
+The following code shows how to call a stored procedure by using the .NET SDK v2:
```csharp dynamic[] newItems = new dynamic[]
RequestOptions options = new RequestOptions { PartitionKey = new PartitionKey("P
var result = await client.ExecuteStoredProcedureAsync<string>(uri, options, new[] { newItems }); ```
-### Stored procedures - .NET SDK V3
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-The following example shows how to register a stored procedure by using the .NET SDK V3:
+The following example shows how to register a stored procedure by using the .NET SDK v3:
```csharp string storedProcedureId = "spCreateToDoItems";
StoredProcedureResponse storedProcedureResponse = await client.GetContainer("myD
}); ```
-The following code shows how to call a stored procedure by using the .NET SDK V3:
+The following code shows how to call a stored procedure by using the .NET SDK v3:
```csharp dynamic[] newItems = new dynamic[]
dynamic[] newItems = new dynamic[]
var result = await client.GetContainer("database", "container").Scripts.ExecuteStoredProcedureAsync<string>("spCreateToDoItem", new PartitionKey("Personal"), new[] { newItems }); ```
-### Stored procedures - Java SDK
+### [Java SDK](#tab/java-sdk)
The following example shows how to register a stored procedure by using the Java SDK:
asyncClient.executeStoredProcedure(sprocLink, requestOptions, storedProcedureArg
successfulCompletionLatch.await(); ```
-### Stored procedures - JavaScript SDK
+### [JavaScript SDK](#tab/javascript-sdk)
The following example shows how to register a stored procedure by using the JavaScript SDK
const sprocId = "spCreateToDoItems";
const {resource: result} = await container.scripts.storedProcedure(sprocId).execute(newItem, {partitionKey: newItem[0].category}); ```
-### Stored procedures - Python SDK
+### [Python SDK](#tab/python-sdk)
The following example shows how to register a stored procedure by using the Python SDK:
new_item = {
result = container.scripts.execute_stored_procedure(sproc=created_sproc,params=[[new_item]], partition_key=new_id) ```
-## <a id="pre-triggers"></a>How to run pre-triggers
++
+## How to run pre-triggers
The following examples show how to register and call a pre-trigger by using the Azure Cosmos DB SDKs. Refer to the [Pre-trigger example](how-to-write-stored-procedures-triggers-udfs.md#pre-triggers); the source for this pre-trigger is saved as `trgPreValidateToDoItemTimestamp.js`.
-When executing, pre-triggers are passed in the RequestOptions object by specifying `PreTriggerInclude` and then passing the name of the trigger in a List object.
+When executing an operation, pre-triggers are passed in the RequestOptions object by specifying `PreTriggerInclude` and then passing the name of the trigger in a List object.
> [!NOTE] > Even though the name of the trigger is passed as a List, you can still execute only one trigger per operation.
-### Pre-triggers - .NET SDK V2
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
-The following code shows how to register a pre-trigger using the .NET SDK V2:
+The following code shows how to register a pre-trigger using the .NET SDK v2:
```csharp string triggerId = "trgPreValidateToDoItemTimestamp";
Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myConta
await client.CreateTriggerAsync(containerUri, trigger); ```
-The following code shows how to call a pre-trigger using the .NET SDK V2:
+The following code shows how to call a pre-trigger using the .NET SDK v2:
```csharp dynamic newItem = new
RequestOptions requestOptions = new RequestOptions { PreTriggerInclude = new Lis
await client.CreateDocumentAsync(containerUri, newItem, requestOptions); ```
-### Pre-triggers - .NET SDK V3
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-The following code shows how to register a pre-trigger using the .NET SDK V3:
+The following code shows how to register a pre-trigger using the .NET SDK v3:
```csharp await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(ne
}); ```
-The following code shows how to call a pre-trigger using the .NET SDK V3:
+The following code shows how to call a pre-trigger using the .NET SDK v3:
```csharp dynamic newItem = new
dynamic newItem = new
await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PreTriggers = new List<string> { "trgPreValidateToDoItemTimestamp" } }); ```
-### Pre-triggers - Java SDK
+### [Java SDK](#tab/java-sdk)
The following code shows how to register a pre-trigger using the Java SDK:
requestOptions.setPreTriggerInclude(Arrays.asList("trgPreValidateToDoItemTimesta
asyncClient.createDocument(containerLink, item, requestOptions, false).toBlocking(); ```
-### Pre-triggers - JavaScript SDK
+### [JavaScript SDK](#tab/javascript-sdk)
The following code shows how to register a pre-trigger using the JavaScript SDK:
await container.items.create({
}, {preTriggerInclude: [triggerId]}); ```
-### Pre-triggers - Python SDK
+### [Python SDK](#tab/python-sdk)
The following code shows how to register a pre-trigger using the Python SDK:
item = {'category': 'Personal', 'name': 'Groceries',
container.create_item(item, {'pre_trigger_include': 'trgPreValidateToDoItemTimestamp'}) ```
-## <a id="post-triggers"></a>How to run post-triggers
++
+## How to run post-triggers
The following examples show how to register a post-trigger by using the Azure Cosmos DB SDKs. Refer to the [Post-trigger example](how-to-write-stored-procedures-triggers-udfs.md#post-triggers); the source for this post-trigger is saved as `trgPostUpdateMetadata.js`.
-### Post-triggers - .NET SDK V2
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
-The following code shows how to register a post-trigger using the .NET SDK V2:
+The following code shows how to register a post-trigger using the .NET SDK v2:
```csharp string triggerId = "trgPostUpdateMetadata";
Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myConta
await client.CreateTriggerAsync(containerUri, trigger); ```
-The following code shows how to call a post-trigger using the .NET SDK V2:
+The following code shows how to call a post-trigger using the .NET SDK v2:
```csharp var newItem = {
Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myConta
await client.createDocumentAsync(containerUri, newItem, options); ```
-### Post-triggers - .NET SDK V3
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-The following code shows how to register a post-trigger using the .NET SDK V3:
+The following code shows how to register a post-trigger using the .NET SDK v3:
```csharp await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(new TriggerProperties
await client.GetContainer("database", "container").Scripts.CreateTriggerAsync(ne
}); ```
-The following code shows how to call a post-trigger using the .NET SDK V3:
+The following code shows how to call a post-trigger using the .NET SDK v3:
```csharp var newItem = {
var newItem = {
await client.GetContainer("database", "container").CreateItemAsync(newItem, null, new ItemRequestOptions { PostTriggers = new List<string> { "trgPostUpdateMetadata" } }); ```
-### Post-triggers - Java SDK
+### [Java SDK](#tab/java-sdk)
The following code shows how to register a post-trigger using the Java SDK:
requestOptions.setPostTriggerInclude(Arrays.asList("trgPostUpdateMetadata"));
asyncClient.createDocument(containerLink, item, requestOptions, false).toBlocking(); ```
-### Post-triggers - JavaScript SDK
+### [JavaScript SDK](#tab/javascript-sdk)
The following code shows how to register a post-trigger using the JavaScript SDK:
const triggerId = "trgPostUpdateMetadata";
await container.items.create(item, {postTriggerInclude: [triggerId]}); ```
-### Post-triggers - Python SDK
+### [Python SDK](#tab/python-sdk)
The following code shows how to register a post-trigger using the Python SDK:
item = {'category': 'Personal', 'name': 'Groceries',
container.create_item(item, {'post_trigger_include': 'trgPreValidateToDoItemTimestamp'}) ```
-## <a id="udfs"></a>How to work with user-defined functions
++
+## How to work with user-defined functions
The following examples show how to register a user-defined function by using the Azure Cosmos DB SDKs. Refer to this [User-defined function example](how-to-write-stored-procedures-triggers-udfs.md#udfs); the source for this user-defined function is saved as `udfTax.js`.
-### User-defined functions - .NET SDK V2
+### [.NET SDK v2](#tab/dotnet-sdk-v2)
-The following code shows how to register a user-defined function using the .NET SDK V2:
+The following code shows how to register a user-defined function using the .NET SDK v2:
```csharp string udfId = "Tax";
await client.CreateUserDefinedFunctionAsync(containerUri, udfTax);
```
-The following code shows how to call a user-defined function using the .NET SDK V2:
+The following code shows how to call a user-defined function using the .NET SDK v2:
```csharp Uri containerUri = UriFactory.CreateDocumentCollectionUri("myDatabase", "myContainer");
foreach (var result in results)
} ```
-### User-defined functions - .NET SDK V3
+### [.NET SDK v3](#tab/dotnet-sdk-v3)
-The following code shows how to register a user-defined function using the .NET SDK V3:
+The following code shows how to register a user-defined function using the .NET SDK v3:
```csharp await client.GetContainer("database", "container").Scripts.CreateUserDefinedFunctionAsync(new UserDefinedFunctionProperties
await client.GetContainer("database", "container").Scripts.CreateUserDefinedFunc
}); ```
-The following code shows how to call a user-defined function using the .NET SDK V3:
+The following code shows how to call a user-defined function using the .NET SDK v3:
```csharp var iterator = client.GetContainer("database", "container").GetItemQueryIterator<dynamic>("SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000");
while (iterator.HasMoreResults)
} ```
-### User-defined functions - Java SDK
+### [Java SDK](#tab/java-sdk)
The following code shows how to register a user-defined function using the Java SDK:
queryObservable.subscribe(
completionLatch.await(); ```
-### User-defined functions - JavaScript SDK
+### [JavaScript SDK](#tab/javascript-sdk)
The following code shows how to register a user-defined function using the JavaScript SDK:
const sql = "SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000";
const {result} = await container.items.query(sql).toArray(); ```
-### User-defined functions - Python SDK
+### [Python SDK](#tab/python-sdk)
The following code shows how to register a user-defined function using the Python SDK:
results = list(container.query_items(
'query': 'SELECT * FROM Incomes t WHERE udf.Tax(t.income) > 20000')) ``` ++ ## Next steps Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB:
cosmos-db How To Write Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-javascript-query-api.md
See the following articles to learn about stored procedures, triggers, and user-
* [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
-* [How to register and use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#stored-procedures)
+* [How to register and use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures)
-* How to register and use [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#post-triggers) in Azure Cosmos DB
+* How to register and use [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) in Azure Cosmos DB
-* [How to register and use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#udfs)
+* [How to register and use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions)
* [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs.md
var helloWorldStoredProc = {
The context object provides access to all operations that can be performed in Azure Cosmos DB, as well as access to the request and response objects. In this case, you use the response object to set the body of the response to be sent back to the client.
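
For reference, a minimal "Hello World" stored procedure along these lines might look like the following sketch:

```javascript
var helloWorldStoredProc = {
    id: "helloWorld",
    serverScript: function () {
        // getContext() exposes the request and response objects
        var context = getContext();
        var response = context.getResponse();

        // Set the body of the response that is returned to the client
        response.setBody("Hello, World");
    }
}
```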
-Once written, the stored procedure must be registered with a collection. To learn more, see [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#stored-procedures) article.
+Once written, the stored procedure must be registered with a collection. To learn more, see the [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures) article.
### <a id="create-an-item"></a>Create an item using stored procedure
function async_sample() {
## <a id="triggers"></a>How to write triggers
-Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item and post-triggers are executed after modifying a database item. Triggers are not automatically executed, they must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) by using the Azure Cosmos DB SDKs.
+Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item and post-triggers are executed after modifying a database item. Triggers aren't automatically executed; they must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) by using the Azure Cosmos DB SDKs.
### <a id="pre-triggers"></a>Pre-triggers
Pre-triggers cannot have any input parameters. The request object in the trigger
When triggers are registered, you can specify the operations that they can run with. This trigger should be created with a `TriggerOperation` value of `TriggerOperation.Create`, which means using the trigger in a replace operation, as shown in the following code, is not permitted.
-For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#post-triggers) articles.
+For examples of how to register and call a pre-trigger, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) sections.
### <a id="post-triggers"></a>Post-triggers
function updateMetadataCallback(err, items, responseOptions) {
One thing that is important to note is the transactional execution of triggers in Azure Cosmos DB. The post-trigger runs as part of the same transaction for the underlying item itself. An exception during the post-trigger execution will fail the whole transaction. Anything committed will be rolled back and an exception returned.
-For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#post-triggers) articles.
+For examples of how to register and call a pre-trigger, see the [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) sections.
## <a id="udfs"></a>How to write user-defined functions
function tax(income) {
} ```
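
Expanded, a tax UDF of this shape might look like the following sketch; the income brackets and rates are illustrative:

```javascript
function tax(income) {
    // Guard against missing input
    if (income == undefined)
        throw 'no input';

    // Illustrative brackets and rates
    if (income < 1000)
        return income * 0.1;
    else if (income < 10000)
        return income * 0.2;
    else
        return income * 0.4;
}
```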
-For examples of how to register and use a user-defined function, see [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#udfs) article.
+For examples of how to register and use a user-defined function, see the [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions) article.
## Logging
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/stored-procedures-triggers-udfs.md
Azure Cosmos DB provides triggers that can be invoked by performing an operation
Similar to pre-triggers, post-triggers are also associated with an operation on an Azure Cosmos item and they don't require any input parameters. They run *after* the operation has completed and have access to the response message that is sent to the client. For examples, see the [How to write triggers](how-to-write-stored-procedures-triggers-udfs.md#triggers) article. > [!NOTE]
-> Registered triggers don't run automatically when their corresponding operations (create / delete / replace / update) happen. They have to be explicitly called when executing these operations. To learn more, see [how to run triggers](how-to-use-stored-procedures-triggers-udfs.md#pre-triggers) article.
+> Registered triggers don't run automatically when their corresponding operations (create / delete / replace / update) happen. They have to be explicitly called when executing these operations. To learn more, see the [how to run triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) article.
## <a id="udfs"></a>User-defined functions
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
This article presents the different ways you can find the [request unit](../requ
## Use the .NET SDK
-Currently, the only SDK that returns the RU charge for table operations is the [.NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
+Currently, the only SDK that returns the RU charge for table operations is the legacy [Microsoft.Azure.Cosmos.Table .NET Standard SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table). The `TableResult` object exposes a `RequestCharge` property that is populated by the SDK when you use it against the Azure Cosmos DB Table API:
```csharp CloudTable tableReference = client.GetTableReference("table");
To learn about optimizing your RU consumption, see these articles:
* [Request units and throughput in Azure Cosmos DB](../request-units.md) * [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
-* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cost-management-billing Mosp Ea Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mosp-ea-transfer.md
Title: Transfer an Azure subscription to an Enterprise Agreement
-description: This article helps you understand the steps to transfer a Microsoft Customer Agreement or MOSP subscription to an Enterprise Agreement.
+description: This article helps you understand the steps to transfer a Microsoft Customer Agreement subscription or MOSP subscription to an Enterprise Agreement.
tags: billing Previously updated : 10/12/2021 Last updated : 05/03/2022 # Transfer an Azure subscription to an Enterprise Agreement (EA)
-This article helps you understand the steps needed to transfer a Microsoft Customer Agreement or MOSP (pay-as-you-go) subscription to an EA. The transfer has no downtime, however there are many steps to follow to enable the transfer.
+This article helps you understand the steps needed to transfer an individual Microsoft Customer Agreement subscription (Azure offer MS-AZR-0017G, pay-as-you-go) or a MOSP pay-as-you-go subscription (Azure offer MS-AZR-0003P) to an EA. The transfer has no downtime; however, there are many steps to follow to enable the transfer.
+
+If you want to transfer a different subscription type to EA, see [Azure subscription and reservation transfer hub](subscription-transfer.md) for supported transfer options.
> [!NOTE] > The transfer process doesn't change Azure AD Directory information that the subscriptions are linked to. If you want to make an Azure AD Directory change, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 03/02/2022 Last updated : 05/04/2022
Users with this role have the highest level of access. They can:
- View and manage all reservation orders and reservations that apply to the Enterprise Agreement. - Enterprise administrator (read-only) can view reservation orders and reservations. They can't manage them.
-You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators. They all inherit the department administrator role.
+You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators.
+
+The EA administrator role automatically inherits all access and privileges of the department administrator role, so there's no need to manually give an EA administrator the department administrator role. Avoid giving the EA administrator the department administrator role because, as a department administrator, the EA administrator:
+
+- Won't have access to the Enrollment tab in the EA portal
+- Won't have access to the Usage Summary page under the Reports tab
+ The enterprise administrator role can be assigned to multiple accounts.
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
Previously updated : 04/27/2022 Last updated : 05/03/2022 # Copy and transform data in Azure SQL Database by using Azure Data Factory or Azure Synapse Analytics
Settings specific to Azure SQL Database are available in the **Source Options**
:::image type="content" source="media/data-flow/isolationlevel.png" alt-text="Isolation Level":::
+**Enable incremental extract**: Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed.
+
+**Incremental date column**: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table.
+
+**Start reading from beginning**: When incremental extract is enabled, setting this option instructs ADF to read all rows on the first execution of the pipeline.
+ ### Sink transformation Settings specific to Azure SQL Database are available in the **Settings** tab of the sink transformation.
data-factory Data Flow External Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-external-call.md
Title: External call data transformation in mapping data flow
+ Title: External call data transformation in mapping data flows
description: Call external custom endpoints for mapping data flows Previously updated : 11/24/2021 Last updated : 05/03/2022
-# External call transformation in mapping data flow
+# External call transformation in mapping data flows
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] [!INCLUDE[data-flow-preamble](includes/data-flow-preamble.md)]
-The external call transformation enables data engineers to call out to external REST end points row-by-row in order to add custom or 3rd party results into your data flow streams.
+The external call transformation enables data engineers to call out to external REST endpoints row by row in order to add custom or third-party results into your data flow streams.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWPXGN] ## Configuration
-In the external call transformation configuration panel, you will first pick the type of external endpoint you wish to connect to, then map incoming columns, and finally define an output data structure which will be consumed by downstream transformations.
+In the external call transformation configuration panel, you'll first pick the type of external endpoint you wish to connect to. Next, map the incoming columns. Finally, define an output data structure to be consumed by downstream transformations.
:::image type="content" source="media/data-flow/external-call-001.png" alt-text="External call":::
You can choose auto-mapping to pass all input columns to the endpoint. Optionall
### Output
-Here is where you will define the data structure for the output of the external call, which will be consumed by downstream data transformations. You can define the data structure manually using ADF data flow syntax to define the column names and data types or click on "import projection" and allow ADF to detect the schema output from the external call. Here is an example schema definition structure as output from a weather REST API GET call:
+This is where you'll define the data structure for the output of the external call. You can define the structure for the body as well as choose how to store the headers and the status returned from the external call.
+
+If you choose to store the body, headers, and status, first choose a column name for each so that they can be consumed by downstream data transformations.
+
+You can define the body data structure manually by using ADF data flow syntax, or select **Import projection** to let ADF detect the schema output from the external call. Here's an example schema definition structure as output from a weather REST API GET call:
``` ({@context} as string[],
databox-online Azure Stack Edge Gpu Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-shares.md
Previously updated : 02/26/2021 Last updated : 05/03/2022 # Use Azure portal to manage shares on your Azure Stack Edge Pro
Do the following steps in the Azure portal to create a share.
3. Select a **Type** for the share. The type can be **SMB** or **NFS**, with SMB being the default. SMB is the standard for Windows clients, and NFS is used for Linux clients. Depending upon whether you choose SMB or NFS shares, options presented are slightly different.
-4. Provide a **Storage account** where the share lives. A container is created in the storage account with the share name if the container already does not exist. If the container already exists, then the existing container is used.
+4. Provide a **Storage account** where the share lives. A container is created in the storage account with the share name if the container doesn't already exist. If the container already exists, then the existing container is used.
5. From the dropdown list, choose the **Storage service** from block blob, page blob, or files. The type of service you choose depends on the format in which you want the data to reside in Azure. For example, in this instance, we want the data to reside as block blobs in Azure, so we select **Block Blob**. If you choose **Page Blob**, you must ensure that your data is 512-byte aligned. Use **Page blob** for VHDs or VHDX, which are always 512-byte aligned.
-6. This step depends on whether you are creating an SMB or an NFS share.
+6. This step depends on whether you're creating an SMB or an NFS share.
- **If creating an SMB share** - In the **All privilege local user** field, choose from **Create new** or **Use existing**. If creating a new local user, provide the **username**, **password**, and then confirm password. This assigns the permissions to the local user. After you have assigned the permissions here, you can then use File Explorer to modify these permissions. ![Add SMB share](media/azure-stack-edge-gpu-manage-shares/add-smb-share.png)
Do the following steps in the Azure portal to create a share.
![Add NFS share](media/azure-stack-edge-gpu-manage-shares/add-nfs-share.png)
-7. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the share is automatically mounted after it is created. When this option is selected, the Edge module can also use the compute with the local mount point.
+7. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the share is automatically mounted after it's created. When this option is selected, the Edge module can also use the compute with the local mount point.
-8. Click **Create** to create the share. You are notified that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
+8. Select **Create** to create the share. You're notified that the share creation is in progress. After the share is created with the specified settings, the **Shares** blade updates to reflect the new share.
## Add a local share
Do the following steps in the Azure portal to create a share.
3. Select a **Type** for the share. The type can be **SMB** or **NFS**, with SMB being the default. SMB is the standard for Windows clients, and NFS is used for Linux clients. Depending upon whether you choose SMB or NFS shares, options presented are slightly different. > [!IMPORTANT]
- > Make sure that the Azure Storage account that you use does not have immutability policies set on it if you are using it with a Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
+ > Make sure that the Azure Storage account that you use doesn't have immutability policies set on it if you're using it with an Azure Stack Edge Pro or Data Box Gateway device. For more information, see [Set and manage immutability policies for blob storage](../storage/blobs/immutable-policy-configure-version-scope.md).
4. To easily access the shares from Edge compute modules, use the local mount point. Select **Use the share with Edge compute** so that the Edge module can use the compute with the local mount point.
Do the following steps in the Azure portal to create a share.
## Mount a share
-If you created a share before you configured compute on your Azure Stack Edge Pro device, you will need to mount the share. Take the following steps to mount a share.
+If you created a share before you configured compute on your Azure Stack Edge Pro device, you'll need to mount the share. Take the following steps to mount a share.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
If you created a share before you configured compute on your Azure Stack Edge Pr
Do the following steps in the Azure portal to unmount a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share that you want to unmount. You want to make sure that the share you unmount is not used by any modules. If the share is used by a module, then you will see issues with the corresponding module.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share that you want to unmount. Make sure that the share you unmount isn't used by any modules; if a module is using the share, you'll see issues with the corresponding module.
![Select share 2](media/azure-stack-edge-gpu-manage-shares/unmount-share-1.png)
Do the following steps in the Azure portal to unmount a share.
## Delete a share
-Do the following steps in the Azure portal to delete a share.
+Use the following steps in the Azure portal to delete a share.
1. From the list of shares, select the share that you want to delete.
- ![Select share 3](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
+ ![Screenshot of select share 3](media/azure-stack-edge-gpu-manage-shares/delete-share-1.png)
-2. Click **Delete**.
+2. Select **Delete**.
- ![Click delete](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
+ ![Screenshot of select delete](media/azure-stack-edge-gpu-manage-shares/delete-share-2.png)
-3. When prompted for confirmation, click **Yes**.
+3. When prompted for confirmation, select **Yes**.
![Confirm delete](media/azure-stack-edge-gpu-manage-shares/delete-share-3.png)
The refresh feature allows you to refresh the contents of a share. When you refresh a share, the contents of the on-premises share are updated with the contents of the associated Azure Storage container.
> [!IMPORTANT]
> - You can't refresh local shares.
-> - Permissions and access control lists (ACLs) are not preserved across a refresh operation.
+> - Permissions and access control lists (ACLs) aren't preserved across a refresh operation.
Do the following steps in the Azure portal to refresh a share.
![Select share 4](media/azure-stack-edge-gpu-manage-shares/refresh-share-1.png)
-2. Click **Refresh**.
+2. Select **Refresh**.
- ![Click refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
+ ![Screenshot of select refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-2.png)
-3. When prompted for confirmation, click **Yes**. A job starts to refresh the contents of the on-premises share.
+3. When prompted for confirmation, select **Yes**. A job starts to refresh the contents of the on-premises share.
![Confirm refresh](media/azure-stack-edge-gpu-manage-shares/refresh-share-3.png)
-4. While the refresh is in progress, the refresh option is grayed out in the context menu. Click the job notification to view the refresh job status.
+4. While the refresh is in progress, the refresh option is grayed out in the context menu. Select the job notification to view the refresh job status.
-5. The time to refresh depends on the number of files in the Azure container as well as the files on the device. Once the refresh has successfully completed, the share timestamp is updated. Even if the refresh has partial failures, the operation is considered successful and the timestamp is updated. The refresh error logs are also updated.
+5. The time to refresh depends on the number of files in the Azure container and the files on the device. Once the refresh has successfully completed, the share timestamp is updated. Even if the refresh has partial failures, the operation is considered successful and the timestamp is updated. The refresh error logs are also updated.
![Updated timestamp](media/azure-stack-edge-gpu-manage-shares/refresh-share-4.png)
-If there is a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
+If there's a failure, an alert is raised. The alert details the cause and the recommendation to fix the issue. The alert also links to a file that has the complete summary of the failures including the files that failed to update or delete.
## Sync pinned files
To automatically sync pinned files, do the following steps in the Azure portal.
![Automated sync for pinned files 3](media/azure-stack-edge-gpu-manage-shares/image-3.png)
-5. From the Azure portal, browse to the container which you created. Upload the file which you want to be pinned into the newcontainer which has the metadata set to pinned.
+5. From the Azure portal, browse to the container that you created. Upload the file that you want to be pinned into the new container, which has the metadata set to pinned.
6. Select **Refresh data** in Azure portal for the device to download the pinning policy for that particular Azure Storage container.
If your storage account keys have been rotated, then you need to sync the storage access key.
Do the following steps in the Azure portal to sync your storage access key.
-1. Go to **Overview** in your resource. From the list of shares, choose and click a share associated with the storage account that you need to sync.
+1. Go to **Overview** in your resource. From the list of shares, select a share associated with the storage account that you need to sync.
![Select share with relevant storage account](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-1.png)
-2. Click **Sync storage key**. Click **Yes** when prompted for confirmation.
+2. Select **Sync storage key**. Select **Yes** when prompted for confirmation.
![Select Sync storage key](media/azure-stack-edge-gpu-manage-shares/sync-storage-key-2.png)
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can order any of the following preconfigured appliances for monitoring your OT networks:
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
|---|---|---|---|
-|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|SMB | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60Mbp/s<br>**Max devices**: 1,000 | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12,000 <br> 32 cores/32 GB RAM/5.6 TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 <br> 8 cores/32 GB RAM/1.8 TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|SMB | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 <br> 4 cores/8 GB RAM/500 GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60 Mbps<br>**Max devices**: 1,000 <br> 8 cores/32 GB RAM/100 GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 cores/8 GB RAM/128 GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
> [!NOTE]
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
For all deployments, bandwidth results for virtual machines may vary, depending
|**Enterprise** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 1.8 TB (300 IOPS) |
|**SMB** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) |
|**Office** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
-|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
+|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 64 GB (150 IOPS) |
## On-premises management console VM requirements
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each event.
### Microsoft.Communication.CallStarted
event-hubs Schema Registry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-overview.md
Title: Azure Schema Registry in Azure Event Hubs description: This article provides an overview of Schema Registry support by Azure Event Hubs. Previously updated : 01/13/2022 Last updated : 05/04/2022
You can use one of the following libraries to include an Avro serializer, which
- [Python - azure-schemaregistry-avroserializer](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/schemaregistry/azure-schemaregistry-avroencoder/)
- [JavaScript - @azure/schema-registry-avro](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/schemaregistry/schema-registry-avro)
- [Apache Kafka](https://github.com/Azure/azure-schema-registry-for-kafka/) - Run Kafka-integrated Apache Avro serializers and deserializers backed by Azure Schema Registry. The Java client's Apache Kafka client serializer for the Azure Schema Registry can be used in any Apache Kafka scenario and with any Apache Kafka® based deployment or cloud service.
+- **Azure CLI** - For an example of adding a schema to a schema group using CLI, see [Adding a schema to a schema group using CLI](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/CLI/AddschematoSchemaGroups).
+- **PowerShell** - For an example of adding a schema to a schema group using PowerShell, see [Adding a schema to a schema group using PowerShell](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/PowerShell/AddingSchematoSchemagroups).
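As a rough sketch of the CLI route (this assumes a CLI version that includes the `az eventhubs namespace schema-registry` command group; all names below are placeholders), creating a schema group might look like the following:

```azurecli
# Create an Avro schema group with forward compatibility checks.
# The resource group, namespace, and group names are placeholders.
az eventhubs namespace schema-registry create \
    --resource-group my-resource-group \
    --namespace-name my-namespace \
    --name my-schema-group \
    --schema-type Avro \
    --schema-compatibility Forward
```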
## Limits

For limits (for example: number of schema groups in a namespace) of Event Hubs, see [Event Hubs quotas and limits](event-hubs-quotas.md).
event-hubs Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md
- Title: Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace-
-description: Configure Azure Policy to audit compliance of Azure Event Hubs for using a minimum version of Transport Layer Security (TLS).
----- Previously updated : 04/25/2022---
-# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace (Preview)
-
-If you have a large number of Microsoft Azure Event Hubs namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Event Hubs namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
-
-## Create a policy with an audit effect
-
-Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource. The audit effect creates a warning when a resource is not in compliance, but does not stop the request. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
-
-To create a policy with an audit effect for the minimum TLS version with the Azure portal, follow these steps:
-
-1. In the Azure portal, navigate to the Azure Policy service.
-2. Under the **Authoring** section, select **Definitions**.
-3. Select **Add policy definition** to create a new policy definition.
-4. For the **Definition location** field, select the **More** button to specify where the audit policy resource is located.
-5. Specify a name for the policy. You can optionally specify a description and category.
-6. Under **Policy rule**, add the following policy definition to the **policyRule** section.
-
- ```json
- {
- "policyRule": {
- "if": {
- "allOf": [
- {
- "field": "type",
- "equals": "Microsoft.EventHub/namespaces"
- },
- {
- "not": {
- "field": " Microsoft.EventHub/namespaces/minimumTlsVersion",
- "equals": "1.2"
- }
- }
- ]
- },
- "then": {
- "effect": "audit"
- }
- }
- }
- ```
-
-7. Save the policy.
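If you prefer to script this step, a hedged Azure CLI equivalent might look like the following sketch. It assumes the `if`/`then` rule object shown above is saved locally as `policy-rule.json`; the definition name is a placeholder.

```azurecli
# Create the audit policy definition from a local rule file.
az policy definition create \
    --name audit-eventhubs-min-tls \
    --display-name "Audit Event Hubs namespaces for minimum TLS 1.2" \
    --rules policy-rule.json \
    --mode All
```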
-
-### Assign the policy
-
-Next, assign the policy to a resource. The scope of the policy corresponds to that resource and any resources beneath it. For more information on policy assignment, see [Azure Policy assignment structure](../governance/policy/concepts/assignment-structure.md).
-
-To assign the policy with the Azure portal, follow these steps:
-
-1. In the Azure portal, navigate to the Azure Policy service.
-2. Under the **Authoring** section, select **Assignments**.
-3. Select **Assign policy** to create a new policy assignment.
-4. For the **Scope** field, select the scope of the policy assignment.
-5. For the **Policy definition** field, select the **More** button, then select the policy you defined in the previous section from the list.
-6. Provide a name for the policy assignment. The description is optional.
-7. Leave **Policy enforcement** set to _Enabled_. This setting has no effect on the audit policy.
-8. Select **Review + create** to create the assignment.
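The assignment can also be scripted. A minimal sketch, assuming the placeholder definition name from the previous step and a placeholder subscription scope:

```azurecli
# Assign the audit policy at subscription scope.
az policy assignment create \
    --name audit-eventhubs-min-tls-assignment \
    --policy audit-eventhubs-min-tls \
    --scope "/subscriptions/<subscription-id>"
```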
-
-### View compliance report
-
-After you have assigned the policy, you can view the compliance report. The compliance report for an audit policy provides information on which Event Hubs namespaces are not in compliance with the policy. For more information, see [Get policy compliance data](../governance/policy/how-to/get-compliance-data.md).
-
-It may take several minutes for the compliance report to become available after the policy assignment is created.
-
-To view the compliance report in the Azure portal, follow these steps:
-
-1. In the Azure portal, navigate to the Azure Policy service.
-2. Select **Compliance**.
-3. Filter the results for the name of the policy assignment that you created in the previous step. The report shows how many resources are not in compliance with the policy.
-4. You can drill down into the report for additional details, including a list of Event Hubs namespaces that are not in compliance.
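From the command line, a comparable (hedged) check is to query policy states for the assignment, using the placeholder assignment name from earlier:

```azurecli
# List the resource IDs of non-compliant namespaces for the assignment.
az policy state list \
    --filter "policyAssignmentName eq 'audit-eventhubs-min-tls-assignment' and complianceState eq 'NonCompliant'" \
    --query "[].resourceId"
```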
-
-## Use Azure Policy to enforce the minimum TLS version
-
-Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To enforce a minimum TLS version requirement for the Event Hubs namespaces in your organization, you can create a policy that prevents the creation of a new Event Hubs namespace that sets the minimum TLS requirement to an older version of TLS than that which is dictated by the policy. This policy will also prevent all configuration changes to an existing namespace if the minimum TLS version setting for that namespace is not compliant with the policy.
-
-The enforcement policy uses the deny effect to prevent a request that would create or modify an Event Hubs namespace so that the minimum TLS version no longer adheres to your organization's standards. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
-
-To create a policy with a deny effect for a minimum TLS version that is less than TLS 1.2, provide the following JSON in the **policyRule** section of the policy definition:
-
-```json
-{
- "policyRule": {
- "if": {
- "allOf": [
- {
- "field": "type",
- "equals": " Microsoft.EventHub/namespaces"
- },
- {
- "not": {
- "field": " Microsoft.EventHub/namespaces/minimumTlsVersion",
- "equals": "1.2"
- }
- }
- ]
- },
- "then": {
- "effect": "deny"
- }
- }
-}
-```
-
-After you create the policy with the deny effect and assign it to a scope, a user cannot create an Event Hubs namespace with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an existing Event Hubs namespace that currently requires a minimum TLS version that is older than 1.2. Attempting to do so results in an error. The required minimum TLS version for the Event Hubs namespace must be set to 1.2 to proceed with namespace creation or configuration.
-
-An error will be shown if you try to create an Event Hubs namespace with the minimum TLS version set to TLS 1.0 when a policy with a deny effect requires that the minimum TLS version be set to TLS 1.2.
-
-## Next steps
-
-See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
event-hubs Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-client-version.md
- Title: Configure Transport Layer Security (TLS) for an Event Hubs client application-
-description: Configure a client application to communicate with Azure Event Hubs using a minimum version of Transport Layer Security (TLS).
----- Previously updated : 04/25/2022---
-# Configure Transport Layer Security (TLS) for an Event Hubs client application (Preview)
-
-For security purposes, an Azure Event Hubs namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Event Hubs will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client who is using TLS 1.1 will fail.
-
-This article describes how to configure a client application to use a particular version of TLS. For information about how to configure a minimum required version of TLS for an Azure Event Hubs namespace, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-configure-minimum-version.md).
-
-## Configure the client TLS version
-
-In order for a client to send a request with a particular version of TLS, the operating system must support that version.
-
-The following example shows how to set the client's TLS version to 1.2 from .NET. The .NET Framework used by the client must support TLS 1.2. For more information, see [Support for TLS 1.2](/dotnet/framework/network-programming/tls#support-for-tls-12).
-
-# [.NET](#tab/dotnet)
-
-The following sample shows how to enable TLS 1.2 in a .NET client using the Azure.Messaging.EventHubs client library for Event Hubs:
-
-```csharp
-using System;
-using System.Net;
-using Azure.Messaging.EventHubs;
-using Azure.Messaging.EventHubs.Producer;
-
-// Enable TLS 1.2 before connecting to Event Hubs.
-ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
-
-// Connection string to your Event Hubs namespace.
-string connectionString = "<NAMESPACE CONNECTION STRING>";
-
-// Name of your event hub.
-string eventHubName = "<EVENT HUB NAME>";
-
-// The producer client used to publish events to the event hub.
-await using var producer = new EventHubProducerClient(connectionString, eventHubName);
-
-// Use the producer client to send an event batch to the event hub.
-using EventDataBatch eventBatch = await producer.CreateBatchAsync();
-var eventData = new EventData("This is an event body");
-
-if (!eventBatch.TryAdd(eventData))
-{
-    throw new Exception($"The event could not be added.");
-}
-
-await producer.SendAsync(eventBatch);
-```
---
-## Verify the TLS version used by a client
-
-To verify that the specified version of TLS was used by the client to send a request, you can use [Fiddler](https://www.telerik.com/fiddler) or a similar tool. Open Fiddler to start capturing client network traffic, then execute one of the examples in the previous section. Look at the Fiddler trace to confirm that the correct version of TLS was used to send the request.
-
-## Next steps
-
-See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
- Title: Configure the minimum TLS version for an Event Hubs namespace using ARM-
-description: Configure an Azure Event Hubs namespace to use a minimum version of Transport Layer Security (TLS).
----- Previously updated : 04/25/2022---
-# Configure the minimum TLS version for an Event Hubs namespace using ARM (Preview)
-
-To configure the minimum TLS version for an Event Hubs namespace, set the `MinimumTlsVersion` version property. When you create an Event Hubs namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
-
-> [!NOTE]
-> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
-
-## Create a template to configure the minimum TLS version
-
-To configure the minimum TLS version for an Event Hubs namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
-
-1. In the Azure portal, choose **Create a resource**.
-2. In **Search the Marketplace**, type **custom deployment**, and then press **ENTER**.
-3. Choose **Custom deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
-4. In the template editor, paste in the following JSON to create a new namespace and set the minimum TLS version to TLS 1.2. Remember to replace the placeholders in angle brackets with your own values.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {
- "eventHubNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
- },
- "resources": [
- {
- "name": "[variables('eventHubNamespaceName')]",
- "type": "Microsoft.EventHub/namespaces",
- "apiVersion": "2022-01-01-preview",
- "location": "westeurope",
- "properties": {
- "minimumTlsVersion": "1.2"
- },
- "dependsOn": [],
- "tags": {}
- }
- ]
- }
- ```
-
-5. Save the template.
-6. Specify the resource group parameter, then choose the **Review + create** button to deploy the template and create a namespace with the `MinimumTlsVersion` property configured.
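If you save the same template to a local file instead, a sketch of deploying it from the CLI (the resource group and file name are placeholders):

```azurecli
# Deploy the saved ARM template to an existing resource group.
az deployment group create \
    --resource-group my-resource-group \
    --template-file min-tls-template.json
```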
-
-> [!NOTE]
-> After you update the minimum TLS version for the Event Hubs namespace, it may take up to 30 seconds before the change is fully propagated.
-
-Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Event Hubs resource provider.
-
-## Check the minimum required TLS version for multiple namespaces
-
-To check the minimum required TLS version across a set of Event Hubs namespaces with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
-
-Running the following query in the Resource Graph Explorer returns a list of Event Hubs namespaces and displays the minimum TLS version for each namespace:
-
-```kusto
-resources
-| where type =~ 'Microsoft.EventHub/namespaces'
-| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
-| project subscriptionId, resourceGroup, name, minimumTlsVersion
-```
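The same query can also be run from the Azure CLI through the Resource Graph extension; a minimal sketch:

```azurecli
# Requires the resource-graph CLI extension.
az extension add --name resource-graph
az graph query -q "resources | where type =~ 'Microsoft.EventHub/namespaces' | extend minimumTlsVersion = parse_json(properties).minimumTlsVersion | project subscriptionId, resourceGroup, name, minimumTlsVersion"
```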
-
-## Test the minimum TLS version from a client
-
-To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
-
-When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
-
-> [!NOTE]
-> Due to limitations in the confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead a general exception will be shown.
-
-> [!NOTE]
-> When you configure a minimum TLS version for an Event Hubs namespace, that minimum version is enforced at the application layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the Event Hubs namespace endpoint.
-
-## Next steps
-
-See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md
- Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace-
-description: Configure an Event Hubs namespace to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Event Hubs.
----- Previously updated : 04/25/2022---
-# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace (Preview)
-
-Communication between a client application and an Azure Event Hubs namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
-
-Azure Event Hubs supports choosing a specific TLS version for namespaces. Currently Azure Event Hubs uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
-
-Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail.
-
-> [!IMPORTANT]
-> If you are using a service that connects to Azure Event Hubs, make sure that that service is using the appropriate version of TLS to send requests to Azure Event Hubs before you set the required minimum version for an Event Hubs namespace.
-
-## Permissions necessary to require a minimum version of TLS
-
-To set the `MinimumTlsVersion` property for the Event Hubs namespace, a user must have permissions to create and manage Event Hubs namespaces. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.EventHub/namespaces/write** or **Microsoft.EventHub/namespaces/\*** action. Built-in roles with this action include:
- The Azure Resource Manager [Owner](../role-based-access-control/built-in-roles.md#owner) role
- The Azure Resource Manager [Contributor](../role-based-access-control/built-in-roles.md#contributor) role
- The [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) role
-Role assignments must be scoped to the level of the Event Hubs namespace or higher to permit a user to require a minimum version of TLS for the Event Hubs namespace. For more information about role scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).
-
-Be careful to restrict assignment of these roles only to those who require the ability to create an Event Hubs namespace or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
-
-> [!NOTE]
-> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [**Owner**](../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage Event Hubs namespaces. For more information, see [**Classic subscription administrator roles, Azure roles, and Azure AD administrator roles**](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
-
-## Network considerations
-
-When a client sends a request to Event Hubs namespace, the client establishes a connection with the public endpoint of the Event Hubs namespace first, before processing any requests. The minimum TLS version setting is checked after the connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection will continue to succeed, but the request will eventually fail.
-
-> [!NOTE]
-> Due to limitations in the confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead a general exception will be shown.
-
-## Next steps
-
-See the following documentation for more information.
- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
frontdoor Front Door Url Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-redirect.md
In Azure Front Door Standard/Premium tier, you can configure URL redirect using
:::image type="content" source="./media/front-door-url-redirect/front-door-url-redirect-rule-set.png" alt-text="Screenshot of creating url redirect with Rule Set." lightbox="./media/front-door-url-redirect/front-door-url-redirect-expanded.png":::
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
::: zone-end

::: zone pivot="front-door-classic"
hpc-cache Hpc Cache Ingest Msrsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest-msrsync.md
Title: Azure HPC Cache data ingest - msrsync description: How to use msrsync to move data to a Blob storage target in Azure HPC Cache-+ Last updated 10/30/2019-+ # Azure HPC Cache data ingest - msrsync method
Follow these instructions to use ``msrsync`` to populate Azure Blob storage with data for Azure HPC Cache:
1. Install ``msrsync`` and its prerequisites (``rsync`` and Python 2.6 or later).
1. Determine the total number of files and directories to be copied.
- For example, use the utility ``prime.py`` with arguments ```prime.py --directory /path/to/some/directory``` (available by downloading <https://github.com/Azure/Avere/blob/master/src/clientapps/dataingestor/prime.py>).
+ For example, use the utility ``prime.py`` with arguments ```prime.py --directory /path/to/some/directory``` (available by downloading <https://github.com/Azure/Avere/blob/main/src/clientapps/dataingestor/prime.py>).
If not using ``prime.py``, you can calculate the number of items with the GNU ``find`` tool as follows:
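A sketch of those counting commands (the path is a placeholder):

```bash
find <path> -type f | wc -l   # number of files
find <path> -type d | wc -l   # number of directories
find <path> | wc -l           # total items (files plus directories)
```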
hpc-cache Hpc Cache Ingest Parallelcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest-parallelcp.md
Title: Azure HPC Cache data ingest - parallel copy script description: How to use a parallel copy script to move data to a Blob storage target in Azure HPC Cache-+ Last updated 10/30/2019-+ # Azure HPC Cache data ingest - parallel copy script method
hpc-cache Hpc Cache Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-ingest.md
Title: Move data to an Azure HPC Cache cloud container description: How to populate Azure Blob storage for use with Azure HPC Cache-+ Previously updated : 06/30/2021- Last updated : 05/02/2022+ # Move data to Azure Blob storage
-If your workflow includes moving data to Azure Blob storage, make sure you are using an efficient strategy. You can either pre-load data in a new blob container before defining it as a storage target, or add the container and then copy your data using Azure HPC Cache.
+If your workflow includes moving data to Azure Blob storage, make sure you are using an efficient strategy. You should create the cache, add the blob container as a storage target, and then copy your data using Azure HPC Cache.
This article explains the best ways to move data to blob storage for use with Azure HPC Cache.

> [!TIP]
-> This article does not apply to NFS-mounted blob storage (ADLS-NFS storage targets). You can use any NFS-based method to populate an ADLS-NFS blob container before adding it to the HPC Cache. Read [Pre-load data with NFS protocol](nfs-blob-considerations.md#pre-load-data-with-nfs-protocol) to learn more.
+> This article does not apply to NFS-mounted blob storage (ADLS-NFS storage targets). You can use any NFS-based method to populate an ADLS-NFS blob container before or after adding it to the HPC Cache. Read [Pre-load data with NFS protocol](nfs-blob-considerations.md#pre-load-data-with-nfs-protocol) to learn more.
Keep these facts in mind:
* Copying data through the Azure HPC Cache to a back-end storage target is more efficient when you use multiple clients and parallel operations. A simple copy command from one client will move data slowly.
-A Python-based utility is available to load content into a blob storage container. Read [Pre-load data in blob storage](#pre-load-data-in-blob-storage-with-clfsload) to learn more.
-
-If you don't want to use the loading utility, or if you want to add content to an existing storage target, follow the parallel data ingest tips in [Copy data through the Azure HPC Cache](#copy-data-through-the-azure-hpc-cache).
-
-## Pre-load data in blob storage with CLFSLoad
-
-You can use the Avere CLFSLoad utility to copy data to a new blob storage container before you add it as a storage target. This utility runs on a single Linux system and writes data in the proprietary format needed for Azure HPC Cache. CLFSLoad is the most efficient way to populate a blob storage container for use with the cache.
-
-The Avere CLFSLoad utility is available by request from your Azure HPC Cache team. Ask your team contact for it, or open a [support ticket](hpc-cache-support-ticket.md) to request assistance.
-
-This option works with new, empty containers only. Create the container before using Avere CLFSLoad.
-
-Detailed information is included in the Avere CLFSLoad distribution, which is available on request from the Azure HPC Cache team.
-
-A general overview of the process:
-
-1. Prepare a Linux system (VM or physical) with Python version 3.6 or later. Python 3.7 is recommended for better performance.
-1. Install the Avere-CLFSLoad software on the Linux system.
-1. Execute the transfer from the Linux command line.
-
-The Avere CLFSLoad utility needs the following information:
-
-* The storage account ID that contains your blob storage container
-* The name of the empty blob storage container
-* A shared access signature (SAS) token that allows the utility to write to the container
-* A local path to the data source - either a local directory that contains the data to copy, or a local path to a mounted remote system with the data
+The strategies outlined in this article work for populating an empty blob container or for adding files to a previously used storage target.
## Copy data through the Azure HPC Cache
-If you don't want to use the Avere CLFSLoad utility, or if you want to add a large amount of data to an existing blob storage target, you can copy it through the cache. Azure HPC Cache is designed to serve multiple clients simultaneously, so to copy data through the cache, you should use parallel writes from multiple clients.
+Azure HPC Cache is designed to serve multiple clients simultaneously, so to copy data through the cache, you should use parallel writes from multiple clients.
![Diagram showing multi-client, multi-threaded data movement: At the top left, an icon for on-premises hardware storage has multiple arrows coming from it. The arrows point to four client machines. From each client machine three arrows point toward the Azure HPC Cache. From the Azure HPC Cache, multiple arrows point to blob storage.](media/hpc-cache-parallel-ingest.png)
This section explains strategies for creating a multi-client, multi-threaded file copy system.
It also explains some utilities that can help. The ``msrsync`` utility can be used to partially automate the process of dividing a dataset into buckets and using rsync commands. The ``parallelcp`` script is another utility that reads the source directory and issues copy commands automatically.
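For illustration, a hypothetical ``msrsync`` invocation against a cache-mounted export might look like the following; the paths and bucket sizing are placeholders, not values from this article:

```bash
# Copy in parallel: 64 rsync processes, buckets of about 3,000 files each.
msrsync -P --stats -p 64 -f 3000 /mnt/source/ /mnt/hpccache/target/
```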
-### Strategic planning
+## Strategic planning
When building a strategy to copy data in parallel, you should understand the tradeoffs in file size, file count, and directory depth.
hpc-cache Hpc Cache Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-namespace.md
Title: Understand the Azure HPC Cache aggregated namespace description: How to plan the virtual namespace for your Azure HPC Cache-+ Previously updated : 09/30/2020- Last updated : 05/02/2022+ # Plan the aggregated namespace
The datacenter storage system exposes these exports:
* */goldline*
* */goldline/templates*
-The data to be analyzed has been copied to an Azure Blob storage container named "sourcecollection" by using the [CLFSLoad utility](hpc-cache-ingest.md#pre-load-data-in-blob-storage-with-clfsload).
+The data to be analyzed has been copied to an Azure Blob storage container named "sourcecollection" by using the NFS data import techniques outlined in [Move data to Azure Blob storage](hpc-cache-ingest.md).
To allow easy access through the cache, consider creating storage targets with these virtual namespace paths:
hpc-cache Hpc Cache Security Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-security-info.md
+
+ Title: Azure HPC Cache Security Information
+description: Security information for Azure HPC Cache
+++ Last updated : 04/06/2022+++
+# Security information for Azure HPC Cache
+
+This security information applies to Microsoft Azure HPC Cache. It addresses common security questions about the configuration and operation of Azure HPC Cache.
+
+## Access to the HPC Cache service
+
+The HPC Cache Service is only accessible through your private virtual network. Microsoft cannot access your virtual network.
+
+Learn more about [connecting private networks](/security/benchmark/azure/baselines/hpc-cache-security-baseline).
+
+## Network infrastructure requirements
+
+Your network needs a dedicated subnet for the Azure HPC Cache, DNS support so the cache can access storage, and access from the subnet to additional Microsoft Azure infrastructure services like NTP servers and the Azure Queue Storage service.
+
+Learn more about [network infrastructure requirements](hpc-cache-prerequisites.md#network-infrastructure).
+
+## Access to NFS storage
+
+The Azure HPC Cache needs specific NFS configurations like outbound NFS port access to on-premises storage.
+
+Learn more about [configuring your NFS storage](hpc-cache-prerequisites.md#nfs-storage-requirements) to work with Azure HPC Cache.
+
+## Encryption
+
+HPC Cache data is encrypted at rest. Encryption keys may be Azure-managed or customer-managed.
+
+Learn more about [implementing customer-managed keys](customer-keys.md).
+
+HPC Cache only supports AUTH_SYS security for NFSv3, so it's not possible to encrypt NFS traffic between clients and the cache. If, however, data is traveling over ExpressRoute, you could [tunnel traffic with IPSEC](../virtual-wan/vpn-over-expressroute.md) for in-transit traffic encryption.
+
+## Access policies based on IP address
+
+You can set CIDR blocks to allow the following access control policies: none, read, read/write, and squashed.
+
+Learn more how to [configure access policies](access-policies.md) based on IP addresses.
+
+You can also optionally configure network security groups (NSGs) to control inbound access to the HPC Cache subnet. This restricts which IP addresses are routed to the HPC Cache subnet.
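As a sketch only (the names, address range, and port are hypothetical), an NSG rule restricting inbound access to the cache subnet could be created like this:

```azurecli
# Allow NFS clients in 10.1.0.0/24 to reach the HPC Cache subnet.
az network nsg rule create \
    --resource-group hpc-rg \
    --nsg-name hpc-cache-nsg \
    --name allow-nfs-clients \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes 10.1.0.0/24 \
    --destination-port-ranges 2049
```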
+
+## Next steps
+
+* Review [Azure HPC Cache security baseline](/security/benchmark/azure/baselines/hpc-cache-security-baseline).
hpc-cache Move Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/move-resource.md
Follow these basic steps to decommission and re-create the HPC Cache in a different region.
Refer to [Move an Azure Storage account to another region](../storage/common/storage-account-move.md) for help.
- Keep these tips in mind:
-
- * If you use [AzCopy](../storage/common/storage-use-azcopy-v10.md), you must use AzCopy V10 or later; earlier versions are unsupported for some types of HPC Cache storage.
- * If you move an NFS-enabled blob container (ADLS-NFS storage target), be aware of the risk of mixing blob-style writes with NFS writes. Read more about this in [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md#pre-load-data-with-nfs-protocol).
+ > [!NOTE]
+ >
+ > If you move an NFS-enabled blob container (ADLS-NFS storage target), be aware of the risk of mixing blob-style writes with NFS writes. Read more about this in [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md#pre-load-data-with-nfs-protocol).
1. Create a new cache in your target region using a convenient method. Read [Template deployment](../azure-resource-manager/templates/overview.md#template-deployment-process) to learn how to use your saved template. Read [Create an HPC Cache](hpc-cache-create.md) to learn about other methods.

1. Wait until the cache has been created and appears in your subscription's **Resources** list with a status of **Healthy**.
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in the TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions.

> [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework. To meet with compliance obligations and to improve security posture, Key Vault will disallow connections via TLS 1.0 & 1.1 starting on 31st May 2022.
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on the .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for the .NET Framework. To meet compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 and 1.1 will be deprecated starting on May 31, 2022, and disallowed later in the future.
## Key Vault authentication options
lab-services How To Manage Vm Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md
VMs can be in one of a few states.
- **Stopping**. The VM is stopping and isn't available for use.

> [!WARNING]
-> Turning on a student VM will not affect the quota for the student. Make sure to stop all VMs manually or using a [schedule](how-to-create-schedules.md) to avoid unexpected costs.
+> Turning on a student VM will not affect the quota for the student. Make sure to stop all VMs manually or use a [schedule](how-to-create-schedules.md) to avoid unexpected costs.
## Manually starting VMs
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
The various load balancer configurations provide the following metrics:
>When distributing traffic from an internal load balancer through an NVA or firewall, SYN packet, byte count, and packet count metrics aren't available and will show as zero.
>
>Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics.
+ >Count aggregation is not recommended for the data path availability and health probe status metrics. Use the average aggregation instead for the best representation of health data.
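As an illustration of that guidance (the resource ID is a placeholder; `VipAvailability` is the data path availability metric in the standard load balancer metric set), you could retrieve the metric with the Average aggregation from the CLI:

```azurecli
# Pull data path availability with the Average aggregation, per minute.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/loadBalancers/<lb-name>" \
    --metric VipAvailability \
    --aggregation Average \
    --interval PT1M
```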
### View your load balancer metrics in the Azure portal
logic-apps Quickstart Logic Apps Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-cli.md
Title: Quickstart - Create and manage workflows with Azure CLI in multi-tenant Azure Logic Apps
-description: Using the CLI, create and manage logic app workflows in multi-tenant Azure Logic Apps.
+ Title: Quickstart - Create and manage workflows with Azure CLI
+description: Using the CLI, create and manage logic app workflows in Azure Logic Apps.
ms.suite: integration-+ Previously updated : 05/25/2021 Last updated : 05/03/2022
-# Quickstart: Create and manage workflows using Azure CLI in multi-tenant Azure Logic Apps
+# Quickstart: Create and manage workflows with Azure CLI in Azure Logic Apps
-This quickstart shows you how to create and manage logic apps by using the [Azure CLI Logic Apps extension](/cli/azure/logic) (`az logic`). From the command line, you can create a logic app by using the JSON file for a logic app workflow definition. You can then manage your logic app by running operations such as `list`, `show` (`get`), `update`, and `delete` from the command line.
+This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using the [Azure CLI Logic Apps extension](/cli/azure/logic) (`az logic`). From the command line, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running operations such as `list`, `show` (`get`), `update`, and `delete` from the command line.
> [!WARNING] > The Azure CLI Logic Apps extension is currently *experimental* and *not covered by customer support*. Use this CLI extension with caution, especially if you choose to use the extension in production environments.
-If you're new to Logic Apps, you can also learn how to create your first logic apps [through the Azure portal](quickstart-create-first-logic-app-workflow.md), [in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md), and [in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md).
+This quickstart currently applies only to Consumption logic app workflows that run in multi-tenant Azure Logic Apps. Azure CLI is currently unavailable for Standard logic app workflows that run in single-tenant Azure Logic Apps. For more information, review [Resource type and host differences in Azure Logic Apps](logic-apps-overview.md#resource-environment-differences).
+
+If you're new to Azure Logic Apps, learn how to create your first Consumption logic app workflow [through the Azure portal](quickstart-create-first-logic-app-workflow.md), [in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md), and [in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md).
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+ * The [Azure CLI](/cli/azure/install-azure-cli) installed on your local computer.
-* The [Logic Apps Azure CLI extension](/cli/azure/azure-cli-extensions-list) installed on your computer. To install this extension, use this command: `az extension add --name logic`
+
+* The [Azure Logic Apps CLI extension](/cli/azure/azure-cli-extensions-list) installed on your computer. To install this extension, use this command: `az extension add --name logic`
+ * An [Azure resource group](#examplecreate-resource-group) in which to create your logic app.
-### Prerequisite check
+### Prerequisites check
-Validate your environment before you begin:
+Before you start, validate your environment:
* Sign in to Azure and check that your subscription is active by running `az login`.

* Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the [latest release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
- * If you don't have the latest version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
-### Example - create resource group
+ If you don't have the latest version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+
+### Example - Create resource group
If you don't already have a resource group for your logic app, create the group with the command `az group create`. For example, the following command creates a resource group named `testResourceGroup` in the location `westus`.
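Reconstructed from those values, the command is:

```azurecli
az group create --name testResourceGroup --location westus
```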
When you run the commands to create or update your logic app, your workflow defi
## Create logic apps from CLI
-You can create a logic app workflow from the Azure CLI using the command [`az logic workflow create`](/cli/azure/logic/workflow#az-logic-workflow-create) with a JSON file for the definition.
+To create a logic app workflow from the Azure CLI, use the command [`az logic workflow create`](/cli/azure/logic/workflow#az-logic-workflow-create) with a JSON file for the definition.
```azurecli
az logic workflow create --definition <path-to-definition.json>
                         --location <location>
                         --name <logic-app-name>
                         --resource-group <resource-group-name>
```
Your command must include the following [required parameters](/cli/azure/logic/w
You can also include additional [optional parameters](/cli/azure/logic/workflow#az-logic-workflow-create-optional-parameters) to configure your logic app's access controls, endpoints, integration account, integration service environment, state, and resource tags.
-### Example - create logic app
+### Example - Create logic app
In this example, a workflow named `testLogicApp` is created in the resource group `testResourceGroup` in the location `westus`. The JSON file `testDefinition.json` contains the workflow definition.
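Assuming `testDefinition.json` is in the current directory, the call would look like this sketch:

```azurecli
az logic workflow create --resource-group "testResourceGroup" --location "westus" --name "testLogicApp" --definition "testDefinition.json"
```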
When your workflow is successfully created, the CLI shows your new workflow defi
## Update logic apps from CLI
-You can also update a logic app's workflow from the Azure CLI using the command [`az logic workflow create`](/cli/azure/logic/workflow#az-logic-workflow-create).
+To update a logic app's workflow from the Azure CLI, use the command [`az logic workflow create`](/cli/azure/logic/workflow#az-logic-workflow-create).
Your command must include the same [required parameters](/cli/azure/logic/workflow#az-logic-workflow-create-required-parameters) as when you [create a logic app](#create-logic-apps-from-cli). You can also add the same [optional parameters](/cli/azure/logic/workflow#az-logic-workflow-create-optional-parameters) as when creating a logic app.
```azurecli
az logic workflow create --definition
                         [--tags]
```
-### Example - update logic app
+### Example - Update logic app
In this example, the [sample workflow created in the previous section](#examplecreate-logic-app) is updated to use a different JSON definition file, `newTestDefinition.json`, and add two resource tags, `testTag1` and `testTag2` with description values.
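A sketch of that update; the tag values are hypothetical placeholders:

```azurecli
az logic workflow create --resource-group "testResourceGroup" --location "westus" --name "testLogicApp" --definition "newTestDefinition.json" --tags testTag1="test value 1" testTag2="test value 2"
```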
When your workflow is successfully updated, the CLI shows your logic app's updat
## Delete logic apps from CLI
-You can delete a logic app's workflow from the Azure CLI using the command [`az logic workflow delete`](/cli/azure/logic/workflow#az-logic-workflow-delete).
+To delete a logic app's workflow from the Azure CLI, use the command [`az logic workflow delete`](/cli/azure/logic/workflow#az-logic-workflow-delete).
Your command must include the following [required parameters](/cli/azure/logic/workflow#az-logic-workflow-delete-required-parameters):
The CLI then prompts you to confirm the deletion of your logic app. You can skip
Are you sure you want to perform this operation? (y/n): ```
-You can confirm a logic app's deletion by [listing your logic apps in the CLI](#list-logic-apps-in-cli), or by viewing your logic apps in the Azure portal.
+To confirm a logic app's deletion, [list your logic apps in the CLI](#list-logic-apps-in-cli), or view your logic apps in the Azure portal.
-### Example - delete logic app
+### Example - Delete logic app
In this example, the [sample workflow created in a previous section](#examplecreate-logic-app) is deleted.
```azurecli
az logic workflow delete --resource-group "testResourceGroup" --name "testLogicApp"
```
After you respond to the confirmation prompt with `y`, the logic app is deleted.
-### Considerations - delete logic app
+### Considerations - Delete logic app
Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the runtime works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions. ## Show logic apps in CLI
-You can get a specific logic app workflow using the command [`az logic workflow show`](/cli/azure/logic/workflow#az-logic-workflow-show).
+To get a specific logic app workflow, use the command [`az logic workflow show`](/cli/azure/logic/workflow#az-logic-workflow-show).
```azurecli az logic workflow show --name
Your command must include the following [required parameters](/cli/azure/logic/w
| Resource group name | `--resource-group -g` | The name of the resource group in which your logic app is located. | ||||
-### Example - get logic app
+### Example - Get logic app
In this example, the logic app `testLogicApp` in the resource group `testResourceGroup` is returned with full logs for debugging.
az logic workflow show --resource-group "testResourceGroup" --name "testLogicApp
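The truncated command above likely ends with the global `--debug` parameter, which produces the full logs mentioned; a sketch under that assumption:

```azurecli
# --debug is a global Azure CLI parameter that prints full diagnostic logs.
az logic workflow show --resource-group "testResourceGroup" --name "testLogicApp" --debug
```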
## List logic apps in CLI
-You can list your logic apps by subscription using the command [`az logic workflow list`](/cli/azure/logic/workflow#az-logic-workflow-list). This command returns the JSON code for your logic apps' workflows.
+To list your logic apps by subscription, use the command [`az logic workflow list`](/cli/azure/logic/workflow#az-logic-workflow-list). This command returns the JSON code for your logic app workflows.
You can filter your results by the following [optional parameters](/cli/azure/logic/workflow#az-logic-workflow-list-optional-parameters):
az logic workflow list [--filter]
[--top] ```
-### Example - list logic apps
+### Example - List logic apps
In this example, all enabled workflows in the resource group `testResourceGroup` are returned in an ASCII table format.
az logic workflow list --resource-group "testResourceGroup" --filter "(State eq
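The complete command might look like the following; the closing value of the filter expression and the `--output table` flag (which renders the ASCII table) are assumptions based on standard Azure CLI syntax:

```azurecli
# Lists only enabled workflows and renders the result as an ASCII table (filter value assumed).
az logic workflow list --resource-group "testResourceGroup" --filter "(State eq 'Enabled')" --output table
```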
## Errors
-The following error indicates that the Azure Logic Apps CLI extension isn't installed. Follow the steps in the prerequisites to [install the Logic Apps extension](#prerequisites) on your computer.
+The following error indicates that the Azure Logic Apps CLI extension isn't installed. Follow the steps in the [prerequisites to install the Logic Apps extension](#prerequisites) on your computer.
```output az: 'logic' is not in the 'az' command group. See 'az --help'. If the command is from an extension, please make sure the corresponding extension is installed. To learn more about extensions, please visit https://docs.microsoft.com/cli/azure/azure-cli-extensions-overview
You can use the following optional global Azure CLI parameters with your `az log
For more information on the Azure CLI, see the [Azure CLI documentation](/cli/azure/).
-You can find additional Logic Apps CLI script samples in [Microsoft's code samples browser](/samples/browse/?products=azure-logic-apps).
+You can find additional Azure Logic Apps CLI script samples in [Microsoft's code samples browser](/samples/browse/?products=azure-logic-apps).
Next, you can create an example logic app through the Azure CLI using a sample script and workflow definition.
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
- Title: Quickstart - Create and manage workflows with Azure PowerShell in multi-tenant Azure Logic Apps
+ Title: Quickstart - Create and manage workflows with Azure PowerShell
-description: Using PowerShell, create and manage logic app workflows in multi-tenant Azure Logic Apps.
+description: Using PowerShell, create and manage logic app workflows with Azure Logic Apps.
ms.suite: integration Previously updated : 07/26/2021 Last updated : 05/03/2022
-# Quickstart: Create and manage workflows using Azure PowerShell in multi-tenant Azure Logic Apps
+# Quickstart: Create and manage workflows with Azure PowerShell in Azure Logic Apps
-This quickstart shows you how to create and manage logic apps by using [Azure PowerShell](/powershell/azure/install-az-ps). From PowerShell, you can create a logic app by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module.
+This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using [Azure PowerShell](/powershell/azure/install-az-ps). From PowerShell, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module.
-If you're new to Azure Logic Apps, you can also learn how to create your first logic apps [through the Azure portal](quickstart-create-first-logic-app-workflow.md), [in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md), and [in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md).
+> [!NOTE]
+>
+> This quickstart currently applies only to Consumption logic app workflows that run in multi-tenant
+> Azure Logic Apps. Azure PowerShell is currently unavailable for Standard logic app workflows that
+> run in single-tenant Azure Logic Apps. For more information, review [Resource type and host differences in Azure Logic Apps](logic-apps-overview.md#resource-environment-differences).
+
+If you're new to Azure Logic Apps, learn how to create your first Consumption logic app workflow [through the Azure portal](quickstart-create-first-logic-app-workflow.md), [in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md), or [in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md).
## Prerequisites

* An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* The [Az PowerShell](/powershell/azure/install-az-ps) module installed on your local computer.
* An [Azure resource group](#examplecreate-resource-group) in which to create your logic app.
-### Prerequisite check
+## Prerequisites check
-Validate your environment before you begin:
+Before you start, validate your environment:
* Sign in to the Azure portal and check that your subscription is active by running [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount).
* Check your version of Azure PowerShell by running `Get-InstalledModule -Name Az`. For the latest version, see the [latest release notes](/powershell/azure/migrate-az-6.0.0).
- * If you don't have the latest version, update your installation by following [Update the Azure PowerShell module](/powershell/azure/install-az-ps#update-the-azure-powershell-module).
-### Example - create resource group
+ If you don't have the latest version, update your installation by following the steps for [Update the Azure PowerShell module](/powershell/azure/install-az-ps#update-the-azure-powershell-module).
+
+### Example - Create resource group
If you don't already have a resource group for your logic app, create the group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. For example, the following command creates a resource group named `testResourceGroup` in the location `westus`.
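A sketch of that command with the names from the example:

```azurepowershell
# Creates the resource group that will hold the logic app.
New-AzResourceGroup -Name testResourceGroup -Location westus
```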
When you run the commands to create or update your logic app, your workflow defi
## Create logic apps from PowerShell
-You can create a logic app workflow from Azure PowerShell using the cmdlet [`New-AzLogicApp`](/powershell/module/az.logicapp/new-azlogicapp) with a JSON file for the definition.
+To create a logic app workflow from Azure PowerShell, use the cmdlet [`New-AzLogicApp`](/powershell/module/az.logicapp/new-azlogicapp) with a JSON file for the definition.
-### Example - create logic app
+### Example - Create logic app
This example creates a workflow named `testLogicApp` in the resource group `testResourceGroup` with the location `westus`. The JSON file `testDefinition.json` contains the workflow definition.
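A sketch of the create command using those values; the definition file path is assumed to be relative to the current directory:

```azurepowershell
# Creates the workflow from the JSON definition file (path assumed to be in the current directory).
New-AzLogicApp -ResourceGroupName testResourceGroup -Name testLogicApp -Location westus -DefinitionFilePath .\testDefinition.json
```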
When your workflow is successfully created, PowerShell shows your new workflow d
## Update logic apps from PowerShell
-You can also update a logic app's workflow from Azure PowerShell using the cmdlet [`Set-AzLogicApp`](/powershell/module/az.logicapp/set-azlogicapp).
+To update a logic app's workflow from Azure PowerShell, use the cmdlet [`Set-AzLogicApp`](/powershell/module/az.logicapp/set-azlogicapp).
-### Example - update logic app
+### Example - Update logic app
This example shows how to update the [sample workflow created in the previous section](#examplecreate-logic-app) using a different JSON definition file, `newTestDefinition.json`.
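A sketch of that update, again assuming the definition file sits in the current directory:

```azurepowershell
# Replaces the workflow definition with the contents of the new JSON file.
Set-AzLogicApp -ResourceGroupName testResourceGroup -Name testLogicApp -DefinitionFilePath .\newTestDefinition.json
```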
When your workflow is successfully updated, PowerShell shows your logic app's up
## Delete logic apps from PowerShell
-You can delete a logic app's workflow from Azure PowerShell using the cmdlet [`Remove-AzLogicApp`](/powershell/module/az.logicapp/remove-azlogicapp).
+To delete a logic app's workflow from Azure PowerShell, use the cmdlet [`Remove-AzLogicApp`](/powershell/module/az.logicapp/remove-azlogicapp).
-### Example - delete logic app
+### Example - Delete logic app
This example deletes the [sample workflow created in a previous section](#examplecreate-logic-app).
Remove-AzLogicApp -ResourceGroupName testResourceGroup -Name testLogicApp
After you respond to the confirmation prompt with `y`, the logic app is deleted.
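To delete without the interactive prompt, the cmdlet also accepts a `-Force` switch, per the Az.LogicApp module's standard parameters; a sketch:

```azurepowershell
# -Force suppresses the confirmation prompt before deletion.
Remove-AzLogicApp -ResourceGroupName testResourceGroup -Name testLogicApp -Force
```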
-### Considerations - delete logic app
+### Considerations - Delete logic app
Deleting a logic app affects workflow instances in the following ways:
-* The Logic Apps service makes a best effort to cancel any in-progress and pending runs.
+* Azure Logic Apps makes a best effort to cancel any in-progress and pending runs.
Even with a large volume or backlog, most runs are canceled before they finish or start. However, the cancellation process might take time to complete. Meanwhile, some runs might get picked up for execution while the runtime works through the cancellation process.
-* The Logic Apps service doesn't create or run new workflow instances.
+* Azure Logic Apps doesn't create or run new workflow instances.
* If you delete a workflow and then recreate the same workflow, the recreated workflow won't have the same metadata as the deleted workflow. You have to resave any workflow that called the deleted workflow. That way, the caller gets the correct information for the recreated workflow. Otherwise, calls to the recreated workflow fail with an `Unauthorized` error. This behavior also applies to workflows that use artifacts in integration accounts and workflows that call Azure functions. ## Show logic apps in PowerShell
-You can get a specific logic app workflow using the command [`Get-AzLogicApp`](/powershell/module/az.logicapp/get-azlogicapp).
+To get a specific logic app workflow, use the command [`Get-AzLogicApp`](/powershell/module/az.logicapp/get-azlogicapp).
-### Example - get logic app
+### Example - Get logic app
This example returns the logic app `testLogicApp` in the resource group `testResourceGroup`.
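A sketch of that command:

```azurepowershell
# Returns the workflow's properties, including its definition and state.
Get-AzLogicApp -ResourceGroupName testResourceGroup -Name testLogicApp
```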
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
To provide resiliency and distributed availability, at least three separate avai
This article provides a brief overview about considerations for using availability zones in Azure Logic Apps and how to enable this capability for your Consumption logic app.
+> [!NOTE]
+>
+> Standard logic apps that use [App Service Environment v3 (ASE v3)](../app-service/environment/overview-zone-redundancy.md)
+> support zone redundancy with availability zones, but only for built-in operations. Currently, support is unavailable
+> for Azure (managed) connectors.
+
## Considerations

During preview, the following considerations apply:

* The following list includes the Azure regions where you can currently enable availability zones, with more regions to be added over time:
- - Brazil South
- - Canada Central
- - France Central
+ * Brazil South
+ * Canada Central
+ * France Central
* Azure Logic Apps currently supports the option to enable availability zones *only for new Consumption logic app workflows* that run in multi-tenant Azure Logic Apps.
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
- Title: How to configure data sources for Azure Managed Grafana Preview with Managed Identity
+ Title: How to configure data sources for Azure Managed Grafana Preview
description: In this how-to guide, discover how you can configure data sources for Azure Managed Grafana using Managed Identity.
Last updated 3/31/2022
-# How to configure data sources for Azure Managed Grafana Preview with Managed Identity
+# How to configure data sources for Azure Managed Grafana Preview
## Prerequisites
You can find all available Grafana data sources by going to your workspace and s
:::image type="content" source="media/managed-grafana-how-to-source-plugins.png" alt-text="Screenshot of the Add data source page.":::
-## Default data sources in an Azure Managed Grafana workspace
+## Default configuration for Azure Monitor
The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your workspace endpoint:
The Azure Monitor data source is automatically added to all new Managed Grafana
Authentication and authorization are subsequently performed through the provided managed identity. With Managed Identity, you can assign permissions for your Managed Grafana workspace to access Azure Monitor data without having to manually manage service principals in Azure Active Directory (Azure AD).
-## Manually assign permissions for Managed Grafana to access data in Azure
-
-Azure Managed Grafana automatically configures the **Monitoring Reader** role for accessing all the Azure Monitor data and Log Analytics resources in your subscription. To change this:
-
-1. Go to the Log Analytics resource that contains the monitoring data you want to visualize.
-1. Select **Access Control (IAM)**.
-1. Search for your Managed Grafana workspace and change the permission.
- ## Next steps > [!div class="nextstepaction"]
+> [Modify access permissions to Azure Monitor](./how-to-permissions.md)
> [Share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
- Title: How to configure permissions for Azure Managed Grafana
+ Title: How to modify access permissions to Azure Monitor
-description: Learn how to manually configure access permissions with roles for your Azure Managed Grafana Preview workspace
+description: Learn how to manually set up permissions that allow your Azure Managed Grafana Preview workspace to access a data source
Last updated 3/31/2022
-# How to configure permissions for Azure Managed Grafana Preview
+# How to modify access permissions to Azure Monitor
By default, when a Grafana workspace is created, Azure Managed Grafana grants it the Monitoring Reader role for all Azure Monitor data and Log Analytics resources within a subscription.
In this article, you'll learn how to manually edit permissions for a specific re
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
-## Assign permissions for an Azure Managed Grafana workspace to access data in Azure
+## Edit Azure Monitor permissions for an Azure Managed Grafana workspace
-To edit permissions for a specific resource, follow these steps:
+To change permissions for a specific resource, follow these steps:
1. Open a resource that contains the monitoring data you want to retrieve. In this example, we're configuring an Application Insights resource. 1. Select **Access Control (IAM)**.
To edit permissions for a specific resource, follow these steps:
## Next steps > [!div class="nextstepaction"]
-> [Configure data source plugins for Azure Managed Grafana with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
Azure Managed Grafana supports the Admin, Viewer and Editor roles:
The Admin role is automatically assigned to the creator of a Grafana workspace. More details on Admin, Editor, and Viewer roles can be found at [Grafana organization roles](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#compare-roles).
-Grafana user roles and assignments are fully integrated with the Azure Active Directory. You can manage these permissions from the Azure portal or the command line. This section explains how to assign users to the Viewer or Editor role in the Azure portal.
+Grafana user roles and assignments are fully integrated with Azure Active Directory (Azure AD). You can add any Azure AD user or security group to a Grafana role and grant them the access permissions associated with that role. You can manage these permissions from the Azure portal or the command line. This section explains how to assign users to the Viewer or Editor role in the Azure portal.
+
+> [!NOTE]
> Azure Managed Grafana currently doesn't support personal [Microsoft accounts](https://account.microsoft.com) (also known as MSA).
## Sign in to Azure
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
## Next steps > [!div class="nextstepaction"]
-> [Configure permissions for Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
-> [Configure data source plugins for Azure Managed Grafana with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
-> [How to call Grafana APIs in your automation with Azure Managed Grafana Preview](./how-to-api-calls.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
+> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
managed-grafana Quickstart Managed Grafana Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-portal.md
You can now start interacting with the Grafana application to configure data sou
## Next steps > [!div class="nextstepaction"]
-> [Configure permissions for Azure Managed Grafana Preview](./how-to-data-source-plugins-managed-identity.md)
-> [Configure data source plugins for Azure Managed Grafana Preview with Managed Identity](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
ms. Previously updated : 10/30/2020 Last updated : 5/2/2022
Last updated 10/30/2020
This article helps you understand how to use scripts to migrate large numbers of VMware virtual machines (VMs) using the agentless method. To scale migrations, you use the [Azure Migrate PowerShell module](./tutorial-migrate-vmware-powershell.md).
-The Azure Migrate VMware migration automation scripts are available for download at [Azure PowerShell Samples](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/migrate-at-scale-vmware-agentles) repo on GitHub. The scripts can be used to migrate VMware VMs to Azure using the agentless migration method. The Azure Migrate PowerShell commands used in these scripts are documented [here](./tutorial-migrate-vmware-powershell.md).
+The Azure Migrate VMware migration automation scripts are available for download in the [Azure PowerShell Samples](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/migrate-at-scale-vmware-agentles) repo on GitHub. The scripts can be used to migrate VMware VMs to Azure using the agentless migration method. The Azure Migrate PowerShell commands used in these scripts are documented [here](./tutorial-migrate-vmware-powershell.md).
## Current limitations-- These scripts support migration of VMware VMs with all disks. You can update the scripts if you want to selectively replicate the disks attached to a VMware VM. -- The scripts support use of assessment recommendations. If assessment recommendations aren't used, then all disks attached to the VMware VM are migrated to the same managed disk type (Standard or Premium). You can update the scripts if you want to use multiple types of managed disks with the same VM
+- These scripts support migration of VMware VMs with all its disks. You can update the scripts if you want to selectively replicate the disks attached to a VMware VM.
+- The scripts support use of assessment recommendations. If assessment recommendations aren't used, all disks attached to the VMware VM are migrated to the same managed disk type (Standard or Premium). You can update the scripts if you want to use multiple types of managed disks with the same VM.
## Prerequisites - [Complete the discovery tutorial](tutorial-discover-vmware.md) to prepare Azure and VMware for migration. - We recommend that you complete the second tutorial to [assess VMware VMs](./tutorial-assess-vmware-azure-vm.md) before migrating them to Azure.-- You have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps)
+- You must have the Azure PowerShell `Az` module. If you need to install or upgrade Azure PowerShell, follow this [guide to install and configure Azure PowerShell](/powershell/azure/install-az-ps).
## Install Azure Migrate PowerShell module
-Azure Migrate PowerShell module is available in preview. You'll need to install the PowerShell module using the following command.
+The Azure Migrate PowerShell module is available in preview. You'll need to install the PowerShell module using the following command.
```azurepowershell Install-Module -Name Az.Migrate ``` ## CSV input file
-Once you have all the pre-requisites completed, you need to create a CSV file that has data for each source VM that you want to migrate. All the scripts are designed to work on the same CSV file. A sample CSV template is available in the scripts folder for your reference.
+Once you have completed all the prerequisites, you need to create a CSV file that has data for each source VM that you want to migrate. All the scripts are designed to work on the same CSV file. A sample CSV template is available in the scripts folder for your reference.
The csv file is configurable so that you can use assessment recommendations and even specify if certain operations are not to be triggered for a particular VM. > [!NOTE]
The csv file is configurable so that you can use assessment recommendations and
AZMIGRATEPROJECT_SUBSCRIPTION_ID | Provide Azure Migrate project subscription ID. AZMIGRATEPROJECT_RESOURCE_GROUP_NAME | Provide Azure Migrate resource group name. AZMIGRATEPROJECT_NAME | Provide the name of the Azure Migrate project in that you want to migrate servers.
-SOURCE_MACHINE_NAME | Provide the friendly name (display name) for the discovered VM in the Azure Migrate project.
-AZMIGRATEASSESSMENT_NAME | Provide the name of assessment that needs to be leveraged for migration.
+SOURCE_MACHINE_NAME | Provide a friendly name (display name) for the discovered VM in the Azure Migrate project.
+AZMIGRATEASSESSMENT_NAME | Provide the name of the assessment that needs to be leveraged for migration.
AZMIGRATEGROUP_NAME | Provide the name of the group that was used for the Azure Migrate assessment.
-TARGET_RESOURCE_GROUP_NAME | Provide the name of the Azure resource group to that the VM needs to be migrated to.
+TARGET_RESOURCE_GROUP_NAME | Provide the name of the Azure resource group to which the VM needs to be migrated.
TARGET_VNET_NAME| Provide the name of the Azure Virtual Network that the migrated VM should use.
-TARGET_SUBNET_NAME | Provide the name of the subnet in the target virtual network that the migrated VM should use. If left blank, then ΓÇ£defaultΓÇ¥ subnet will be used.
-TARGET_MACHINE_NAME | Provide the name that the migrated VM should use in Azure. If left blank, then the source machine name will be used.
-TARGET_MACHINE_SIZE | Provide the SKU that the VM should use in Azure. To migrate a VM to D2_v2 VM in Azure, specify the value in this field as "Standard_D2_v2". If you use an assessment, then this value will be derived based on assessment recommendation.
-LICENSE_TYPE | Specify if you want to use Azure Hybrid Benefit for Windows Server VMs. Use value "WindowsServer" to take advantage of Azure Hybrid Benefit. Otherwise leave blank or use "NoLicenseType".
+TARGET_SUBNET_NAME | Provide the name of the subnet in the target virtual network that the migrated VM should use. If left blank, the "default" subnet will be used.
+TARGET_MACHINE_NAME | Provide the name that the migrated VM should use in Azure. If left blank, the source machine name will be used.
+TARGET_MACHINE_SIZE | Provide the Stock Keeping Unit (SKU) that the VM should use in Azure. To migrate a VM to D2_v2 VM in Azure, specify the value in this field as "Standard_D2_v2". If you use an assessment, this value will be derived based on the assessment recommendation.
+LICENSE_TYPE | Specify if you want to use Azure Hybrid Benefit for Windows Server VMs. Use value "WindowsServer" to take advantage of Azure Hybrid Benefit. Otherwise, leave it blank or use "NoLicenseType".
OS_DISK_ID | Provide the OS disk ID for the VM to be migrated. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved using the Get-AzMigrateServer cmdlet. The script will use the first disk of the VM as the OS disk in case no value is provided.
-TARGET_DISKTYPE | Provide the disk type to be used for all disks of the VM in Azure. Use 'Premium_LRS' for premium-managed disks, 'StandardSSD_LRS' for standard SSD disks and 'Standard_LRS' to use standard HDD disks. If you choose to use an assessment, then the script will prioritize using recommended disk types for each disk of the VM. If you don't use assessment or specify any value, the script will use standard HDD disks by default.
-AVAILABILITYZONE_NUMBER | Specify the availability zone number to be used for the migrated VM. You can leave this blank in case you don't want to use availability zones.
-AVAILABILITYSET_NAME | Specify the name of the availability set to be used for the migrated VM. You can leave this blank in case you don't want to use availability set.
-TURNOFF_SOURCESERVER | Specify 'Y' if you want to turn off source VM at the time of migration. Use 'N' otherwise. If left blank, then the script assumes the value as 'N'.
+TARGET_DISKTYPE | Provide the disk type to be used for all disks of the VM in Azure. Use 'Premium_LRS' for premium-managed disks, 'StandardSSD_LRS' for standard SSD disks and 'Standard_LRS' to use standard HDD disks. If you choose to use an assessment, the script will prioritize using recommended disk types for each disk of the VM. If you don't use assessment or specify any value, the script will use standard HDD disks by default.
+AVAILABILITYZONE_NUMBER | Specify the availability zone number to be used for the migrated VM. You can leave this blank if you don't want to use availability zones.
+AVAILABILITYSET_NAME | Specify the name of the availability set to be used for the migrated VM. You can leave this blank if you don't want to use availability set.
+TURNOFF_SOURCESERVER | Specify 'Y' if you want to turn off the source VM at the time of migration. Use 'N' otherwise. If left blank, the script assumes the value 'N'.
TESTMIGRATE_VNET_NAME | Specify the name of the virtual network to be used for test migration.
-UPDATED_TARGET_RESOURCE_GROUP_NAME | If you want to update the resource group to be used by the migrated VM in Azure, then specify the name of the Azure resource group, else leave blank.
-UPDATED_TARGET_VNET_NAME | If you want to update the Virtual Network to be used by the migrated VM in Azure, then specify the name of the Azure Virtual Network, else leave blank.
-UPDATED_TARGET_MACHINE_NAME | If you want to update the name to be used by the migrated VM in Azure, then specify the new name to be used, else leave blank.
-UPDATED_TARGET_MACHINE_SIZE | If you want to update the SKU to be used by the migrated VM in Azure, then specify the new SKU to be used, else leave blank.
-UPDATED_AVAILABILITYZONE_NUMBER | If you want to update the availability zone to be used by the migrated VM in Azure, then specify the new availability zone to be used, else leave blank.
-UPDATED_AVAILABILITYSET_NAME | If you want to update the availability set to be used by the migrated VM in Azure, then specify the new availability set to be used, else leave blank.
-UPDATE_NIC1_ID | Specify the ID of the NIC to be updated. If left blank, then the script assumes the value to be the first NIC of the discovered VM. If you don't want to update the NIC of the VM, then leave all the fields containing NIC name blank.
-UPDATED_TARGET_NIC1_SELECTIONTYPE | Specify the value to be used for this NIC. Use "Primary","Secondary" or "DoNotCreate" to specify if this NIC should be the primary, secondary, or should not be created on the migrated VM. Only one NIC can be specified as the primary NIC for the VM. Leave blank if you don't want to update.
+UPDATED_TARGET_RESOURCE_GROUP_NAME | If you want to update the resource group to be used by the migrated VM in Azure, specify the name of the Azure resource group, else leave it blank.
+UPDATED_TARGET_VNET_NAME | If you want to update the Virtual Network to be used by the migrated VM in Azure, specify the name of the Azure Virtual Network, else leave it blank.
+UPDATED_TARGET_MACHINE_NAME | If you want to update the name to be used by the migrated VM in Azure, specify the new name to be used, else leave it blank.
+UPDATED_TARGET_MACHINE_SIZE | If you want to update the SKU to be used by the migrated VM in Azure, specify the new SKU to be used, else leave it blank.
+UPDATED_AVAILABILITYZONE_NUMBER | If you want to update the availability zone to be used by the migrated VM in Azure, specify the new availability zone to be used, else leave it blank.
+UPDATED_AVAILABILITYSET_NAME | If you want to update the availability set to be used by the migrated VM in Azure, specify the new availability set to be used, else leave it blank.
+UPDATE_NIC1_ID | Specify the ID of the NIC to be updated. If left blank, the script assumes the value to be the first NIC of the discovered VM. If you don't want to update the NIC of the VM, leave all the fields containing NIC name blank.
+UPDATED_TARGET_NIC1_SELECTIONTYPE | Specify the value to be used for this NIC. Use "Primary", "Secondary", or "DoNotCreate" to specify if this NIC should be the primary, secondary, or should not be created on the migrated VM. Only one NIC can be specified as the primary NIC for the VM. Leave blank if you don't want to update.
UPDATED_TARGET_NIC1_SUBNET_NAME | Specify the name of the subnet to use for the NIC on the migrated VM. Leave blank if you don't want to update. UPDATED_TARGET_NIC1_IP | Specify the IPv4 address to be used by the NIC on the migrated VM if you want to use static IP. Use "auto" if you want to automatically assign the IP. Leave blank if you don't want to update. UPDATE_NIC2_ID | Specify the ID of the NIC to be updated. If left blank, then the script assumes the value to be the second NIC of the discovered VM. If you don't want to update the NIC of the VM, then leave all the fields containing NIC name blank.
OK_TO_TESTMIGRATE_CLEANUP | Use 'Y' to indicate whether the test migration for t
## Script execution
-Once the CSV is ready, you can execute the following steps to migrate your on-premise VMware VMs.
+Once the CSV is ready, you can execute the following steps to migrate your on-premises VMware VMs.
**Step #** | **Script Name** | **Description** | |
In addition to the above, the folder also contains AzMigrate_Template.ps1 that c
Once you have downloaded the scripts, the scripts can be executed as follows.
-If you want to execute the script to start replication for VMs using the Input.csv file, then use the following syntax.
+If you want to execute the script to start replication for VMs using the Input.csv file, use the following syntax.
```azurepowershell ".\AzMigrate_StartReplication.ps1" .\Input.csv
migrate How To Migrate At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate-at-scale.md
- Title: Automate migration machine migration in Azure Migrate
+ Title: Automate migration of machines in Azure Migrate
description: Describes how to use scripts to migrate a large number of machines in Azure Migrate ms. Previously updated : 04/01/2019 Last updated : 5/02/2022
This article helps you understand how to use scripts to migrate large number of
Site Recovery scripts are available for your download at [Azure PowerShell Samples](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/migrate-at-scale-with-site-recovery) repo on GitHub. The scripts can be used to migrate VMware, AWS, GCP VMs, and physical servers to managed disks in Azure. You can also use these scripts to migrate Hyper-V VMs if you migrate the VMs as physical servers. The scripts that leverage Azure Site Recovery PowerShell are documented [here](../site-recovery/vmware-azure-disaster-recovery-powershell.md). ## Current limitations-- Support specifying the static IP address only for the primary NIC of the target VM-- The scripts do not take Azure Hybrid Benefit related inputs, you need to manually update the properties of the replicated VM in the portal
+- Supports specifying the static IP address only for the primary NIC of the target VM.
+- The scripts do not take Azure Hybrid Benefit related inputs; you need to manually update the properties of the replicated VM in the portal.
## How does it work? ### Prerequisites Before you get started, you need to do the following steps:-- Ensure that the Site Recovery vault is created in your Azure subscription-- Ensure that the Configuration Server and Process Server are installed in the source environment and the vault is able to discover the environment-- Ensure that a Replication Policy is created and associated with the Configuration Server-- Ensure that you have added the VM admin account to the config server (that will be used to replicate the on premises VMs)-- Ensure that the target artifacts in Azure are created
+- Ensure that the Site Recovery vault is created in your Azure subscription.
+- Ensure that the Configuration Server and Process Server are installed in the source environment and the vault can discover the environment.
+- Ensure that a Replication Policy is created and associated with the Configuration Server.
+- Ensure that you have added the VM admin account to the config server (that will be used to replicate the on-premises VMs).
+- Ensure that the following target artifacts in Azure are created:
- Target Resource Group - Target Storage Account (and its Resource Group) - Create a premium storage account if you plan to migrate to premium-managed disks - Cache Storage Account (and its Resource Group) - Create a standard storage account in the same region as the vault
Before you get started, you need to do the following steps:
- Target Virtual Network for Test failover (and its Resource Group) - Availability Set (if needed) - Target Network Security Group and its Resource Group-- Ensure that you have decided on the properties of the target VM
+- Ensure that you have decided on the following properties of the target VM:
- Target VM name - Target VM size in Azure (can be decided using Azure Migrate assessment) - Private IP Address of the primary NIC in the VM - Download the scripts from [Azure PowerShell Samples](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/migrate-at-scale-with-site-recovery) repo on GitHub ### CSV Input file
-Once you have all the pre-requisites completed, you need to create a CSV file, which has data for each source machine that you want to migrate. The input CSV must have a header line with the input details and a row with details for each machine that needs to be migrated. All the scripts are designed to work on the same CSV file. A sample CSV template is available in the scripts folder for your reference.
+Once you have completed all the prerequisites, you need to create a CSV file that has data for each source machine that you want to migrate. The input CSV must have a header line with the input details and a row with details for each machine that needs to be migrated. All the scripts are designed to work on the same CSV file. A sample CSV template is available in the scripts folder for your reference.
### Script execution Once the CSV is ready, you can execute the following steps to perform migration of the on-premises VMs:
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md
ms. Previously updated : 11/19/2019 Last updated : 9/23/2021
This article provides information about working with the previous version of Azu
There are two versions of the Azure Migrate service: - **Current version**: Use this version to create Azure Migrate projects, discover on-premises machines, and orchestrate assessments and migrations. [Learn more](whats-new.md) about what's new in this version.-- **Previous version**: If you're using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. The previous version projects are referred to as Classic projects in this article. Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, classic version of Azure Migrate will no longer be supported and the inventory metadata in classic projects will be deleted. If you still need to use classic Azure Migrate projects, this is what you can and can't do:
+- **Previous version**: If you're using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. The previous version projects are referred to as Classic projects in this article. Classic Azure Migrate is retiring in Feb 2024. After Feb 2024, the classic version of Azure Migrate will no longer be supported and the inventory metadata in classic projects will be deleted. If you still need to use classic Azure Migrate projects, this is what you can and can't do:
- You can no longer create migration projects. - We recommend that you don't perform new discoveries. - You can still access existing projects.
There are two versions of the Azure Migrate service:
## Upgrade between versions
-You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a Classic project, you can attach it to a project of current version after you delete the Classic project.
+You can't upgrade projects or components in the previous version to the new version. You need to [create a new Azure Migrate project](create-manage-projects.md), and [add assessment and migration tools](./create-manage-projects.md#next-steps) to it. Use the tutorials to understand how to use the assessment and migration tools available. If you had a Log Analytics workspace attached to a Classic project, you can attach it to a project of the current version after you delete the Classic project.
## Find projects from previous version
Find and delete projects from the previous version as follows:
After VMs are discovered in the portal, you group them and create assessments. -- You can immediately create as on-premises assessments immediately after VMs are discovered in the portal.
+- You can create on-premises assessments immediately after VMs are discovered in the portal.
- For performance-based assessments, we recommend you wait at least a day before creating a performance-based assessment, to get reliable size recommendations. Create an assessment as follows:
The Azure readiness view in the assessment shows the readiness status of each VM
| | Ready for Azure | No compatibility issues. The machine can be migrated as-is to Azure, and it will boot in Azure with full Azure support. | For VMs that are ready, Azure Migrate recommends a VM size in Azure. Conditionally ready for Azure | The machine might boot in Azure, but might not have full Azure support. For example, a machine with an older version of Windows Server that isn't supported in Azure. | Azure Migrate explains the readiness issues, and provides remediation steps.
-Not ready for Azure | The VM won't boot in Azure. For example, if a VM has a disk that's more than 4 TB, it can't be hosted on Azure. | Azure Migrate explains the readiness issues, and provides remediation steps.
+Not ready for Azure | The VM won't boot in Azure. For example, if a VM has a disk that's more than 4 TB, it can't be hosted on Azure. | Azure Migrate explains the readiness issues and provides remediation steps.
Readiness unknown | Azure Migrate can't identify Azure readiness, usually because data isn't available.
Readiness takes into account a number of VM properties, to identify whether the
| | **Boot type** | BIOS supported. UEFI not supported. | Conditionally ready if boot type is UEFI. **Cores** | Machines core <= the maximum number of cores (128) supported for an Azure VM.<br/><br/> If performance history is available, Azure Migrate considers the utilized cores.<br/>If a comfort factor is specified in the assessment settings, the number of utilized cores is multiplied by the comfort factor.<br/><br/> If there's no performance history, Azure Migrate uses the allocated cores, without applying the comfort factor. | Ready if less than or equal to limits.
-**Memory** | The machine memory size <= the maximum memory (3892 GB on Azure M series Standard_M128m&nbsp;<sup>2</sup>) for an Azure VM. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized memory.<br/><br/>If a comfort factor is specified, the utilized memory is multiplied by the comfort factor.<br/><br/> If there's no history the allocated memory is used, without applying the comfort factor.<br/><br/> | Ready if within limits.
+**Memory** | The machine memory size <= the maximum memory (3892 GB on Azure M series Standard_M128m&nbsp;<sup>2</sup>) for an Azure VM. [Learn more](../virtual-machines/sizes.md).<br/><br/> If performance history is available, Azure Migrate considers the utilized memory.<br/><br/>If a comfort factor is specified, the utilized memory is multiplied by the comfort factor.<br/><br/> If there's no history, the allocated memory is used, without applying the comfort factor.<br/><br/> | Ready if within limits.
**Storage disk** | Allocated size of a disk must be 4 TB (4096 GB) or less.<br/><br/> The number of disks attached to the machine must be 65 or less, including the OS disk. | Ready if within limits. **Networking** | A machine must have 32 or less NICs attached to it. | Ready if within limits.
Readiness takes into account a number of VM properties, to identify whether the
Along with VM properties, Azure Migrate also looks at the guest OS of the on-premises VM to identify if the VM can run in Azure. -- Azure Migrate considers the OS specified in vCenter Server.
+- Azure Migrate considers the OS specified in the vCenter Server.
- Since the discovery done by Azure Migrate is appliance-based, it does not have a way to verify if the OS running inside the VM is the same as the one specified in vCenter Server. The following logic is used.
Windows Server 2012 R2 and all SPs | Azure provides full support. | Ready for Az
Windows Server 2012 and all SPs | Azure provides full support. | Ready for Azure Windows Server 2008 R2 and all SPs | Azure provides full support.| Ready for Azure Windows Server 2008 (32-bit and 64-bit) | Azure provides full support. | Ready for Azure
-Windows Server 2003, 2003 R2 | Out-of-support and need a [Custom Support Agreement (CSA)](/troubleshoot/azure/virtual-machines/server-software-support) for support in Azure. | Conditionally ready for Azure, consider upgrading the OS before migrating to Azure.
-Windows 2000, 98, 95, NT, 3.1, MS-DOS | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure, it is recommended to upgrade the OS before migrating to Azure.
-Windows Client 7, 8 and 10 | Azure provides support with [Visual Studio subscription only.](../virtual-machines/windows/client-images.md) | Conditionally ready for Azure
-Windows 10 Pro Desktop | Azure provides support with [Multitenant Hosting Rights.](../virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md) | Conditionally ready for Azure
-Windows Vista, XP Professional | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure, it is recommended to upgrade the OS before migrating to Azure.
+Windows Server 2003, 2003 R2 | Out-of-support and need a [Custom Support Agreement (CSA)](/troubleshoot/azure/virtual-machines/server-software-support) for support in Azure. | Conditionally ready for Azure. Consider upgrading the OS before migrating to Azure.
+Windows 2000, 98, 95, NT, 3.1, MS-DOS | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to upgrade the OS before migrating to Azure.
+Windows Client 7, 8 and 10 | Azure provides support with [Visual Studio subscription only.](../virtual-machines/windows/client-images.md) | Conditionally ready for Azure.
+Windows 10 Pro Desktop | Azure provides support with [Multitenant Hosting Rights](../virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md). | Conditionally ready for Azure.
+Windows Vista, XP Professional | Out-of-support. The machine might boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to upgrade the OS before migrating to Azure.
Linux | Azure endorses these [Linux operating systems](../virtual-machines/linux/endorsed-distros.md). Other Linux operating systems might boot in Azure, but we recommend upgrading the OS to an endorsed version, before migrating to Azure. | Ready for Azure if the version is endorsed.<br/><br/>Conditionally ready if the version is not endorsed.
-Other operating systems<br/><br/> For example, Oracle Solaris, Apple macOS etc., FreeBSD, etc. | Azure doesn't endorse these operating systems. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure, it is recommended to install a supported OS before migrating to Azure.
+Other operating systems<br/><br/> For example, Oracle Solaris, Apple macOS, FreeBSD, and so on. | Azure doesn't endorse these operating systems. The machine may boot in Azure, but no OS support is provided by Azure. | Conditionally ready for Azure. It is recommended to install a supported OS before migrating to Azure.
OS specified as **Other** in vCenter Server | Azure Migrate cannot identify the OS in this case. | Unknown readiness. Ensure that the OS running inside the VM is supported in Azure. 32-bit operating systems | The machine may boot in Azure, but Azure may not provide full support. | Conditionally ready for Azure, consider upgrading the OS of the machine from 32-bit OS to 64-bit OS before migrating to Azure.
Cost estimates show the total compute and storage cost of running the VMs in Azu
- Cost estimates are calculated using the size recommendation for a VM machine, and its disks, and the assessment properties. - Estimated monthly costs for compute and storage are aggregated for all VMs in the group.-- The cost estimation is for running the on-premises VM as Azure Infrastructure as a service (IaaS) VMs. Azure Migrate doesn't consider Platform as a service (PaaS), or Software as a service (SaaS) costs.
+- The cost estimation is for running the on-premises VMs as Azure Infrastructure as a service (IaaS) VMs. Azure Migrate doesn't consider costs for Platform as a service (PaaS) or Software as a service (SaaS).
### Review confidence rating (performance-based assessment)
An assessment might not have all the data points available due to a number of re
- You didn't profile your environment for the duration of the assessment. For example, if you create the assessment with performance duration set to one day, you must wait for at least a day after you start the discovery, or all the data points to be collected. - Some VMs were shut down during the period for which the assessment was calculated. If any VMs were powered off for part of the duration, Azure Migrate can't collect performance data for that period.-- Some VMs were created in between during the assessment calculation period. For example, if you create an assessment using the last month's performance history, but create a number of VMs in the environment a week ago, the performance history of the new VMs won't be for the entire duration.
+- Some VMs were created during the assessment calculation period. For example, if you create an assessment using the last month's performance history but created a number of VMs in the environment a week ago, the performance history of the new VMs won't cover the entire duration.
> [!NOTE] > If the confidence rating of any assessment is below five stars, wait at least a day for the appliance to profile the environment, and then recalculate the assessment. If you don't, performance-based sizing might not be reliable. If you don't want to recalculate, we recommend switching to as-on-premises sizing by changing the assessment properties.
To set up dependency visualization, you associate a Log Analytics workspace with
To use dependency visualization, you associate a Log Analytics workspace with a migration project. You can only create or attach a workspace in the same subscription where the migration project is created.
-1. To attach a Log Analytics workspace to a project, in **Overview**, > **Essentials**, click **Requires configuration**.
+1. To attach a Log Analytics workspace to a project, in **Overview** > **Essentials**, click **Requires configuration**.
2. You can create a new workspace, or attach an existing one:
- - To create a new workspace, specify a name. The workspace is created in a region in the same [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) as the migration project.
- - When you attach an existing workspace, you can pick from all the available workspaces in the same subscription as the migration project. Only those workspaces are listed which were created in a [supported Service Map region](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions). To attach a workspace, ensure that you have 'Reader' access to the workspace.
+ - To create a new workspace, specify a name. The workspace is created in a region in the same [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/) as the migration project.
+ - When you attach an existing workspace, you can pick from all the available workspaces in the same subscription as the migration project. Only those workspaces are listed which were created in a [supported Service Map region](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions). To attach a workspace, ensure that you have 'Reader' access to the workspace.
> [!NOTE] > You can't change the workspace associated with a migration project.
After you configure a workspace, you download and install agents on each on-prem
4. Copy the workspace ID and key. You need these when you install the MMA on the on-premises machine. > [!NOTE]
-> To automate the installation of agents you can use a deployment tool such as Configuration Manager or a partner tool such as, Intigua, that provides an agent deployment solution for Azure Migrate.
+> To automate the installation of agents, you can use a deployment tool such as Configuration Manager, or a partner tool such as Intigua, which provides an agent deployment solution for Azure Migrate.
#### Install the MMA agent on a Windows machine
To install the agent on a Linux machine:
### Install the MMA agent on a machine monitored by Operations Manager
-For machines monitored by System Center Operations Manager 2012 R2 or later, there is no need to install the MMA agent. Service Map integrates with the Operations Manager MMA to gather the necessary dependency data. [Learn more](../azure-monitor/vm/service-map-scom.md#prerequisites). The Dependency agent does need to be installed.
+For machines monitored by System Center Operations Manager 2012 R2 or later, there is no need to install the MMA agent. Service Map integrates with the Operations Manager MMA to gather the necessary dependency data. [Learn more](../azure-monitor/vm/service-map-scom.md#prerequisites). However, the Dependency agent still needs to be installed.
### Install the Dependency agent
For machines monitored by System Center Operations Manager 2012 R2 or later, the
`sh InstallDependencyAgent-Linux64.bin` -- Learn more about the [Dependency agent support](../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) for the Windows and Linux operating systems.-- [Learn more](../azure-monitor/vm/vminsights-enable-hybrid.md#dependency-agent) about how you can use scripts to install the Dependency agent.
+ - Learn more about the [Dependency agent support](../azure-monitor/vm/vminsights-enable-overview.md#supported-operating-systems) for the Windows and Linux operating systems.
+ - [Learn more](../azure-monitor/vm/vminsights-enable-hybrid.md#dependency-agent) about how you can use scripts to install the Dependency agent.
>[!NOTE] > The Azure Monitor for VMs articles referenced here for an overview of the system prerequisites and the methods to deploy the Dependency agent also apply to the Service Map solution. ### Create a group with dependency mapping
-1. After you install the agents, go to the portal and click **Manage** > **Machines**.
+1. After you install the agents, go to the portal, and click **Manage** > **Machines**.
2. Search for the machine where you installed the agents. 3. The **Dependencies** column for the machine should now show as **View Dependencies**. Click the column to view the dependencies of the machine. 4. The dependency map for the machine shows the following details:
For machines monitored by System Center Operations Manager 2012 R2 or later, the
- The dependent machines that do not have the MMA and dependency agent installed are grouped by port numbers. - The dependent machines that have the MMA and the dependency agent installed are shown as separate boxes. - Processes running inside the machine, you can expand each machine box to view the processes
- - Machine properties, including the FQDN, operating System, MAC address are shown. You can click on each machine box to view details.
+ - Machine properties, including the FQDN, operating system, and MAC address, are shown. You can click on each machine box to view the details.
-4. You can view dependencies for different time durations by clicking on the time duration in the time range label. By default the range is an hour. You can modify the time range, or specify start and end dates, and duration.
+4. You can view dependencies for different time durations by clicking on the time duration in the time range label. By default the range is an hour. You can modify the time range, or specify start and end dates, and the duration.
> [!NOTE] > A time range of up to an hour is supported. Use Azure Monitor logs to [query dependency data](./how-to-create-group-machine-dependencies.md) over a longer duration.
migrate Scale Hyper V Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/scale-hyper-v-assessment.md
ms. Previously updated : 07/10/2019 Last updated : 05/02/2022 # Assess large numbers of servers in Hyper-V environment for migration to Azure
-This article describes how to assess large numbers of on-premises servers in Hyper-V environment for migration to Azure, using the Azure Migrate Discovery and assessment tool.
+This article describes how to assess large numbers of on-premises servers in a Hyper-V environment for migration to Azure, using the Azure Migrate: Discovery and assessment tool.
[Azure Migrate](migrate-services-overview.md) provides a hub of tools that help you to discover, assess, and migrate apps, infrastructure, and workloads to Microsoft Azure. The hub includes Azure Migrate tools, and third-party independent software vendor (ISV) offerings.
This article describes how to assess large numbers of on-premises servers in Hyp
In this article, you learn how to: > [!div class="checklist"] > * Plan for assessment at scale.
-> * Configure Azure permissions, and prepare Hyper-V for assessment.
-> * Create an Azure Migrate project, and create an assessment.
+> * Configure Azure permissions and prepare Hyper-V for assessment.
+> * Create an Azure Migrate project and create an assessment.
> * Review the assessment as you plan for migration. > [!NOTE]
-> If you want to try out a proof-of-concept to assess a couple of servers before assessing at scale, follow our [tutorial series](./tutorial-discover-hyper-v.md)
+> If you want to try out a proof-of-concept to assess a couple of servers before assessing at scale, follow our [tutorial series](./tutorial-discover-hyper-v.md).
## Plan for assessment
-When planning for assessment of large number of servers in Hyper-V environment, there are a couple of things to think about:
+When planning for assessment of a large number of servers in a Hyper-V environment, there are a couple of things to think about:
-- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or you need to store discovery, assessment or migration-related metadata in a different geography, you might need multiple projects.
+- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or if you need to store discovery, assessment, or migration-related metadata in a different geography, you might need multiple projects.
- **Plan appliances**: Azure Migrate uses an on-premises Azure Migrate appliance, deployed as a Hyper-V VM, to continually discover servers for assessment and migration. The appliance monitors environment changes such as adding servers, disks, or network adapters. It also sends metadata and performance data about them to Azure. You need to figure out how many appliances to deploy.
Use the limits summarized in this table for planning.
## Other planning considerations

-- To start discovery from the appliance, you have to select each Hyper-V host.
+- To start discovery from the appliance, you must select each Hyper-V host.
- If you're running a multi-tenant environment, you can't currently discover only servers that belong to a specific tenant.

## Prepare for assessment
-Prepare Azure and Hyper-V for Discovery and assessment tool:
+Prepare Azure and Hyper-V for the Discovery and assessment tool:
1. Verify [Hyper-V support requirements and limitations](migrate-support-matrix-hyper-v.md).
-2. Set up permissions for your Azure account to interact with Azure Migrate
-3. Prepare Hyper-V hosts and servers
+2. Set up permissions for your Azure account to interact with Azure Migrate.
+3. Prepare Hyper-V hosts and servers.
Follow the instructions in [this tutorial](./tutorial-discover-hyper-v.md) to configure these settings.
In accordance with your planning requirements, do the following:
1. Create Azure Migrate projects.
2. Add the Azure Migrate: Discovery and assessment tool to the projects.
-[Learn more](./create-manage-projects.md)
+[Learn more](./create-manage-projects.md) about creating a project.
## Create and review an assessment
-1. Create assessments for servers in Hyper-V environment.
+1. Create assessments for servers in a Hyper-V environment.
1. Review the assessments in preparation for migration planning. [Learn more](tutorial-assess-hyper-v.md) about creating and reviewing assessments.
migrate Scale Physical Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/scale-physical-assessment.md
ms. Previously updated : 01/19/2020 Last updated : 05/02/2022 # Assess large numbers of physical servers for migration to Azure
-This article describes how to assess large numbers of on-premises physical servers for migration to Azure, using the Azure Migrate Discovery and assessment tool.
+This article describes how to assess large numbers of on-premises physical servers for migration to Azure, using the Azure Migrate: Discovery and assessment tool.
[Azure Migrate](migrate-services-overview.md) provides a hub of tools that help you to discover, assess, and migrate apps, infrastructure, and workloads to Microsoft Azure. The hub includes Azure Migrate tools, and third-party independent software vendor (ISV) offerings.
This article describes how to assess large numbers of on-premises physical serve
In this article, you learn how to: > [!div class="checklist"] > * Plan for assessment at scale.
-> * Configure Azure permissions, and prepare physical servers for assessment.
+> * Configure Azure permissions and prepare physical servers for assessment.
> * Create an Azure Migrate project, and create an assessment. > * Review the assessment as you plan for migration.
In this article, you learn how to:
When planning for assessment of a large number of physical servers, there are a couple of things to think about:

-- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or you need to store discovery, assessment or migration-related metadata in a different geography, you might need multiple projects.
+- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or you need to store discovery, assessment, or migration-related metadata in a different geography, you might need multiple projects.
- **Plan appliances**: Azure Migrate uses an on-premises Azure Migrate appliance, deployed on a Windows server, to continually discover servers for assessment and migration. The appliance monitors environment changes such as adding servers, disks, or network adapters. It also sends metadata and performance data about them to Azure. You need to figure out how many appliances to deploy.
Use the limits summarized in this table for planning.
## Other planning considerations

-- To start discovery from the appliance, you have to select each physical server.
+- To start discovery from the appliance, you must select each physical server.
## Prepare for assessment
In accordance with your planning requirements, do the following:
1. Create an Azure Migrate project.
2. Add the Azure Migrate: Discovery and assessment tool to the project.
-[Learn more](./create-manage-projects.md)
+[Learn more](./create-manage-projects.md) about creating projects.
## Create and review an assessment
migrate Scale Vmware Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/scale-vmware-assessment.md
ms. Previously updated : 03/23/2020 Last updated : 05/02/2022 # Assess large numbers of servers in VMware environment for migration to Azure
-This article describes how to assess large numbers (1000-35,000) of on-premises servers in VMware environment for migration to Azure, using the Azure Migrate Discovery and assessment tool.
+This article describes how to assess large numbers (1,000-35,000) of on-premises servers in a VMware environment for migration to Azure, using the Azure Migrate: Discovery and assessment tool.
[Azure Migrate](migrate-services-overview.md) provides a hub of tools that help you to discover, assess, and migrate apps, infrastructure, and workloads to Microsoft Azure. The hub includes Azure Migrate tools, and third-party independent software vendor (ISV) offerings. In this article, you learn how to: > [!div class="checklist"] > * Plan for assessment at scale.
-> * Configure Azure permissions, and prepare VMware for assessment.
-> * Create an Azure Migrate project, and create an assessment.
+> * Configure Azure permissions and prepare VMware for assessment.
+> * Create an Azure Migrate project and create an assessment.
> * Review the assessment as you plan for migration. > [!NOTE]
-> If you want to try out a proof-of-concept to assess a couple of servers before assessing at scale, follow our [tutorial series](./tutorial-discover-vmware.md)
+> If you want to try out a proof-of-concept to assess a couple of servers before assessing at scale, follow our [tutorial series](./tutorial-discover-vmware.md).
## Plan for assessment

When planning for assessment of a large number of servers in a VMware environment, there are a couple of things to think about:

-- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or you need to store discovery, assessment or migration-related metadata in a different geography, you might need multiple projects.
+- **Plan Azure Migrate projects**: Figure out how to deploy Azure Migrate projects. For example, if your data centers are in different geographies, or you need to store discovery, assessment, or migration-related metadata in a different geography, you might need multiple projects.
- **Plan appliances**: Azure Migrate uses an on-premises Azure Migrate appliance, deployed as a VMware VM, to continually discover servers. The appliance monitors environment changes such as adding servers, disks, or network adapters. It also sends metadata and performance data about them to Azure. You need to figure out how many appliances you need to deploy.
-- **Plan accounts for discovery**: The Azure Migrate appliance uses an account with access to vCenter Server in order to discover servers for assessment and migration. If you're discovering more than 10,000 servers, set up multiple accounts as it is required there is no overlap among servers discovered from any two appliances in a project.
+- **Plan accounts for discovery**: The Azure Migrate appliance uses an account with access to vCenter Server in order to discover servers for assessment and migration. If you're discovering more than 10,000 servers, set up multiple accounts and ensure there is no overlap among the servers discovered by any two appliances in a project.
> [!NOTE] > If you are setting up multiple appliances, ensure there is no overlap among the servers on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario. If a server is discovered by more than one appliance, this results in duplicates in discovery and in issues while enabling replication for the server using the Azure portal in Server Migration.
Use the limits summarized in this table for planning.
**Planning** | **Limits**
--- | ---
**Azure Migrate projects** | Assess up to 35,000 servers in a project.
-**Azure Migrate appliance** | An appliance can discover up to 10,000 servers on a vCenter Server.<br/> An appliance can connect up to 10 vCenter Servers.<br/> An appliance can only be associated with a single Azure Migrate project.<br/> Any number of appliances can be associated with a single Azure Migrate project. <br/><br/>
+**Azure Migrate appliance** | An appliance can discover up to 10,000 servers on a vCenter Server.<br/> An appliance can connect to up to 10 vCenter Servers.<br/> An appliance can only be associated with a single Azure Migrate project.<br/> Any number of appliances can be associated with a single Azure Migrate project. <br/><br/>
**Group** | You can add up to 35,000 servers in a single group.
**Azure Migrate assessment** | You can assess up to 35,000 servers in a single assessment.
With these limits in mind, here are some example deployments:
**vCenter server** | **Servers to be discovered** | **Recommendation** | **Action**
--- | --- | --- | ---
One | < 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to discover servers from up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. <br> <br>You can analyze dependencies on servers across vCenter Servers discovered from the same appliance.|
-One | > 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to connect up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. You need to deploy additional appliances for every 10,000 servers.<br><br> If the number of servers is greater than 10,000, set up additional appliances with the vCenter Server accounts scoped accordingly. <br><br> You can analyze dependencies on servers across vCenter Servers discovered from the same appliance.<br> <br> Ensure there is no overlap among the servers on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario. If a server is discovered by more than one appliance, this results in a duplicates in discovery and in issues while enabling replication for the server using the Azure portal in Server Migration. |
+One | > 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to connect up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. You need to deploy additional appliances for every 10,000 servers.<br><br> If the number of servers is greater than 10,000, set up additional appliances with the vCenter Server accounts scoped accordingly. <br><br> You can analyze dependencies on servers across vCenter Servers discovered from the same appliance.<br> <br> Ensure there is no overlap among the servers on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario. If a server is discovered by more than one appliance, this results in duplicates in discovery and issues while enabling replication for the server using the Azure portal in Server Migration. |
Multiple | < 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to connect up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. <br><br> You need to deploy additional appliances for every 10 vCenter Servers.<br> <br> You can analyze dependencies on servers across vCenter Servers discovered from the same appliance. |
-Multiple | > 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to discover VMs from up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. You need to deploy additional appliances for every 10 vCenter Servers. <br><br> If the number of servers is greater than 10,000, set up additional appliances with the vCenter Server accounts scoped accordingly. <br><br> You can analyze dependencies on servers across vCenter Servers discovered from the same appliance. <br><br> Ensure there is no overlap among the servers on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario. If a server is discovered by more than one appliance, this results in a duplicates in discovery and in issues while enabling replication for the server using the Azure portal in Server Migration. |
+Multiple | > 10,000 | One Azure Migrate project.<br><br> One appliance can discover up to 10,000 servers running on up to 10 vCenter Servers.<br><br> Provide one or more vCenter Server accounts for discovery. | Set up an appliance to discover VMs from up to 10 vCenter Servers mapped to one or more vCenter Server accounts, scoped to discover less than 10,000 servers. You need to deploy additional appliances for every 10 vCenter Servers. <br><br> If the number of servers is greater than 10,000, set up additional appliances with the vCenter Server accounts scoped accordingly. <br><br> You can analyze dependencies on servers across vCenter Servers discovered from the same appliance. <br><br> Ensure there is no overlap among the servers on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario. If a server is discovered by more than one appliance, this results in duplicates in discovery and issues while enabling replication for the server using the Azure portal in Server Migration. |
Multiple | > 10,000 | One Azure Migrate project.<br><br> One appliance can disco
If you're planning for a multi-tenant environment, you can scope the discovery on the vCenter Server.

-- You can set the appliance discovery scope to a vCenter Server datacenters, clusters or folder of clusters, hosts or folder of hosts, or individual servers.
+- You can set the appliance discovery scope to vCenter Server datacenters, clusters or a folder of clusters, hosts or a folder of hosts, or individual servers.
- If your environment is shared across tenants and you want to discover each tenant separately, you can scope access to the vCenter account that the appliance uses for discovery.
- You may want to scope by VM folders if the tenants share hosts. Azure Migrate can't discover servers if the vCenter account has access granted at the vCenter VM folder level. If you are looking to scope your discovery by VM folders, you can do so by ensuring the vCenter account has read-only access assigned at a server level. [Learn more](set-discovery-scope.md).
In accordance with your planning requirements, do the following:
1. Create Azure Migrate projects.
2. Add the Azure Migrate: Discovery and assessment tool to the projects.
-[Learn more](./create-manage-projects.md)
+[Learn more](./create-manage-projects.md) about creating a project.
## Create and review an assessment
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Title: Azure App Containerization Java; Containerization and migration of Java web applications to Azure App Service.
+ Title: Containerization and migration of Java web applications to Azure App Service.
description: Tutorial:Containerize & migrate Java web applications to Azure App Service. Previously updated : 3/2/2021 Last updated : 5/2/2022 # Java web app containerization and migration to Azure App Service In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure App Service](https://azure.microsoft.com/services/app-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure App Service.
-The Azure Migrate: App Containerization tool currently supports -
+The Azure Migrate: App Containerization tool currently supports:
- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service.
-- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md)
-- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-app-containerization-aspnet-kubernetes.md)
-- Containerizing ASP.NET apps and deploying them on Windows containers on App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md).
+- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-app-containerization-aspnet-kubernetes.md).
+- Containerizing ASP.NET apps and deploying them on Windows containers on App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md).
-The Azure Migrate: App Containerization tool helps you to -
+The Azure Migrate: App Containerization tool helps you to:
- **Discover your application**: The tool remotely connects to the application servers running your Java web application (running on Apache Tomcat) and discovers the application components. The tool creates a Dockerfile that can be used to create a container image for the application.
- **Build the container image**: You can inspect and further customize the Dockerfile as per your application requirements and use that to build your application container image. The application container image is pushed to an Azure Container Registry you specify.
- **Deploy to Azure App Service**: The tool then generates the deployment files needed to deploy the containerized application to Azure App Service.

> [!NOTE]
-> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
+> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md).
While all applications won't benefit from a straight shift to containers without significant rearchitecting, some of the benefits of moving existing apps to containers without rewriting include:
Before you begin this tutorial, you should:
**Requirement** | **Details**
--- | ---
**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the Java web applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy.
-**Application servers** | - Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
-**Java web application** | The tool currently supports <br/><br/> - Applications running on Tomcat 8 or later.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java version 7 or later. <br/><br/> The tool currently doesn't support <br/><br/> - Applications servers running multiple Tomcat instances <br/>
+**Application servers** | Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
+**Java web application** | The tool currently supports: <br/><br/> - Applications running on Tomcat 8 or later.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java version 7 or later. <br/><br/> The tool currently doesn't support: <br/><br/> - Application servers running multiple Tomcat instances. <br/>
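As a quick pre-check against the support matrix above, you might verify the Java and Tomcat versions directly on a candidate server; a minimal sketch follows, where the Tomcat install path is an assumption that varies by distribution.

```bash
# Check the Java version on the application server (needs Java 7 or later).
java -version

# Print the Tomcat version (needs Tomcat 8 or later).
# /opt/tomcat is an assumed install path; adjust for your server.
/opt/tomcat/bin/version.sh
```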
## Prepare an Azure user account
Before you begin this tutorial, you should:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin. Once your subscription is set up, you'll need an Azure user account with:
-- Owner permissions on the Azure subscription
-- Permissions to register Azure Active Directory apps
+- Owner permissions on the Azure subscription.
+- Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows:
If you just created a free Azure account, you're the owner of your subscription.
![Search for a user account to check access and assign a role.](./media/tutorial-discover-vmware/azure-account-access.png)
-6. In **Add role assignment**, select the Owner role, and select the account (azmigrateuser in our example). Then click **Save**.
+6. In **Add role assignment**, select the Owner role, and select the account (azmigrateuser in our example). Click **Save**. A CLI sketch of this role assignment appears at the end of this section.
![Opens the Add Role assignment page to assign a role to the account.](./media/tutorial-discover-vmware/assign-role.png)
-7. Your Azure account also needs **permissions to register Azure Active Directory apps.**
-8. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
-9. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
+ Your Azure account also needs **permissions to register Azure Active Directory apps.**
+8. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+9. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png)
+ ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png)
-10. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+10. If the 'App registrations' setting is set to 'No', request the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
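As referenced above, here is a hedged CLI sketch of the Owner role assignment from the portal steps; the user principal name and subscription ID are placeholders.

```bash
# Assign the Owner role at subscription scope.
# Both the assignee and the subscription ID below are placeholders.
az role assignment create \
  --assignee "azmigrateuser@contoso.com" \
  --role "Owner" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```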
## Download and install Azure Migrate: App Containerization tool
If you just created a free Azure account, you're the owner of your subscription.
## Launch the App Containerization tool
-1. Open a browser on any machine that can connect to the Windows machine running the App Containerization tool, and open the tool URL: **https://*machine name or IP address*: 44369**.
+1. Open a browser on any machine that can connect to the Windows machine running the App Containerization tool and open the tool URL: **https://*machine name or IP address*: 44369**.
Alternatively, you can open the app from the desktop by selecting the app shortcut.
-2. If you see a warning stating that says your connection isn't private, click Advanced and choose to proceed to the website. This warning appears as the web interface uses a self-signed TLS/SSL certificate.
+2. If you see a warning stating that your connection isn't private, click Advanced and choose to proceed to the website. This warning appears as the web interface uses a self-signed TLS/SSL certificate.
3. At the sign-in screen, use the local administrator account on the machine to sign in.
4. Select **Java web apps on Tomcat** as the type of application you want to containerize.
5. To specify target Azure service, select **Containers on Azure App Service**.

   ![Default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)

### Complete tool pre-requisites
-1. Accept the **license terms**, and read the third-party information.
+1. Accept the **license terms** and read the third-party information.
6. In the tool web app > **Set up prerequisites**, do the following steps: - **Connectivity**: The tool checks that the Windows machine has internet access. If the machine uses a proxy:
- - Click on **Set up proxy** to specify the proxy address (in the form IP address or FQDN) and listening port.
+ - Click **Set up proxy** to specify the proxy address (in the form IP address or FQDN) and listening port.
- Specify credentials if the proxy needs authentication. - Only HTTP proxy is supported.
- - If you've added proxy details or disabled the proxy and/or authentication, click on **Save** to trigger connectivity check again.
+ - If you've added proxy details or disabled the proxy and/or authentication, click **Save** to trigger connectivity check again.
- **Install updates**: The tool will automatically check for latest updates and install them. You can also manually install the latest version of the tool from [here](https://go.microsoft.com/fwlink/?linkid=2134571).
- **Enable Secure Shell (SSH)**: The tool will inform you to ensure that Secure Shell (SSH) is enabled on the application servers running the Java web applications to be containerized.
If you just created a free Azure account, you're the owner of your subscription.
Click **Sign in** to log in to your Azure account.
-1. You'll need a device code to authenticate with Azure. Clicking on sign in will open a modal with the device code.
-2. Click on **Copy code & sign in** to copy the device code and open an Azure sign in prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.
+1. You'll need a device code to authenticate with Azure. Clicking **Sign in** will open a modal with the device code.
+2. Click **Copy code & sign in** to copy the device code and open an Azure sign-in prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.
![Modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)
The App Containerization helper tool connects remotely to the application server
1. Specify the **IP address/FQDN and the credentials** of the server running the Java web application that should be used to remotely connect to the server for application discovery.
   - The credentials provided must be for a root account (Linux) on the application server.
   - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
- - You can run application discovery for upto five servers at a time.
+ - You can run application discovery for up to five servers at a time.
2. Click **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.
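Before clicking **Validate**, you could confirm the SSH path manually from the machine running the tool; a minimal sketch, where the hostname is a placeholder:

```bash
# Confirm SSH on port 22 is reachable with the root credentials the tool will use.
# "tomcat-server-01" is a placeholder for your application server.
ssh -p 22 root@tomcat-server-01 'echo "SSH connection OK"'
```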
The App Containerization helper tool connects remotely to the application server
### Parameterize application configurations

Parameterizing the configuration makes it available as a deployment time parameter. This allows you to configure this setting while deploying the application as opposed to having it hard-coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
-1. Click **app configurations** to review detected configurations.
+1. Click **App configurations** to review detected configurations.
2. Select the checkbox to parameterize the detected application configurations. 3. Click **Apply** after selecting the configurations to parameterize.
Parameterizing the configuration makes it available as a deployment time paramet
![Screenshot for app ACR selection.](./media/tutorial-containerize-apps-aks/build-java-app.png) > [!NOTE]
-> Only Azure container registries with admin user enabled are displayed. The admin account is currently required for deploying an image from an Azure container registry to Azure App Service. [Learn more](../container-registry/container-registry-authentication.md#admin-account)
+> Only Azure container registries with admin user enabled are displayed. The admin account is currently required for deploying an image from an Azure container registry to Azure App Service. [Learn more](../container-registry/container-registry-authentication.md#admin-account). A CLI sketch for enabling the admin user follows these steps.
2. **Review the Dockerfile**: The Dockerfiles needed to build the container images for each selected application are generated at the beginning of the build step. Click **Review** to review the Dockerfile. You can also add any necessary customizations to the Dockerfile in the review step and save the changes before starting the build process.
3. **Configure Application Insights**: You can enable monitoring for your Java apps running on App Service without instrumenting your code. The tool will install the Java standalone agent as part of the container image. Once configured during deployment, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics for your application that can be used for monitoring with Application Insights. This option is enabled by default for all Java applications.
-4. **Trigger build process**: Select the applications to build images for and click **Build**. Clicking build will start the container image build for each application. The tool keeps monitoring the build status continuously and will let you proceed to the next step upon successful completion of the build.
+4. **Trigger build process**: Select the applications to build images for and click **Build**. Clicking **Build** will start the container image build for each application. The tool keeps monitoring the build status continuously and will let you proceed to the next step upon successful completion of the build.
-5. **Track build status**: You can also monitor progress of the build step by clicking the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
+5. **Track build status**: You can also monitor progress of the build step by clicking the **Build in Progress** link under the **Build status** column. The link takes a couple of minutes to be active after you've triggered the build process.
-6. Once the build is completed, click **Continue** to specify deployment settings.
+6. Once the build is completed, click **Continue** to specify the deployment settings.
![Screenshot for app container image build completion.](./media/tutorial-containerize-apps-aks/build-java-app-completed.png)
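As referenced in the note above, a hedged sketch for enabling the admin user on an existing registry so that it appears in the tool's registry list; the registry name is a placeholder.

```bash
# Enable the admin user on an existing Azure Container Registry.
# "myregistry" is a placeholder for your registry name.
az acr update --name myregistry --admin-enabled true
```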
Once the container image is built, the next step is to deploy the application as
1. **Select the Azure App Service plan**: Specify the Azure App Service plan that the application should use.
- - If you don't have an App Service plan or would like to create a new App Service plan to use, you can choose to create on from the tool by clicking **Create new App Service plan**.
+ - If you don't have an App Service plan or would like to create a new App Service plan to use, you can choose to create one from the tool by clicking **Create new App Service plan**. A CLI sketch for pre-creating a plan follows these steps.
- Click **Continue** after selecting the App Service plan.
-2. **Specify secret store and monitoring workspace**: If you had opted to parameterize application configurations, then specify the secret store to be used for the application. You can choose Azure Key Vault or App Service application settings for managing your application secrets. [Learn more](../app-service/configure-common.md#configure-connection-strings)
+2. **Specify secret store and monitoring workspace**: If you had opted to parameterize application configurations, then specify the secret store to be used for the application. You can choose Azure Key Vault or App Service application settings for managing your application secrets. [Learn more](../app-service/configure-common.md#configure-connection-strings).
- If you've selected App Service application settings for managing secrets, then click **Continue**. - If you'd like to use an Azure Key Vault for managing your application secrets, then specify the Azure Key Vault that you'd want to use.
- - If you don't have an Azure Key Vault or would like to create a new Key Vault, you can choose to create on from the tool by clicking **Create new**.
+ - If you don't have an Azure Key Vault or would like to create a new Key Vault, you can choose to create one from the tool by clicking **Create new**.
- The tool will automatically assign the necessary permissions for managing secrets through the Key Vault.
- **Monitoring workspace**: If you'd selected to enable monitoring with Application Insights, then specify the Application Insights resource that you'd want to use. This option won't be visible if you had disabled monitoring integration.
- If you don't have an Application Insights resource or would like to create a new resource, you can choose to create one from the tool by clicking **Create new**.
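As referenced earlier, a hedged sketch for pre-creating a Linux App Service plan from the CLI; the resource group, plan name, and SKU below are placeholders.

```bash
# Create a Linux App Service plan that the tool can then select.
# Resource group, plan name, and SKU are placeholders; pick what fits your workload.
az appservice plan create \
  --name java-apps-plan \
  --resource-group my-migration-rg \
  --is-linux \
  --sku P1V2
```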
Once the container image is built, the next step is to deploy the application as
## Troubleshoot issues
-To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
+To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are available in the *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
## Next steps
migrate Tutorial Assess Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-gcp.md
ms. Previously updated : 09/14/2020 Last updated : 05/2/2022 #Customer intent: As a server admin, I want to assess my GCP instances in preparation for migration to Azure.
In this tutorial, you learn how to:
- Run an assessment based on performance data. > [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
+> Tutorials show the quickest path for trying out a scenario and using default options where possible.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Decide which assessment to run
-Decide whether you want to run an assessment using sizing criteria based on server configuration data/metadata that's collected as-is on-premises, or based on performance data.
+Decide whether you want to run an assessment using sizing criteria based on server configuration data/metadata that's collected as-is on-premises or based on performance data.
**Assessment** | **Details** | **Recommendation**
--- | --- | ---
Decide whether you want to run an assessment using sizing criteria based on serv
Run an assessment as follows:
-1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
+1. Go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**.
![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
Run an assessment as follows:
:::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Location of the edit button to review assessment properties":::
-1. In **Assessment properties** > **Target Properties**:
+1. In **Assessment properties** > **Target Properties**, do the following:
- In **Target location**, specify the Azure region to which you want to migrate. - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
- - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government)
+ - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government).
- In **Storage type**,
- - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
+ - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on the disk IOPS and throughput.
- Alternatively, select the storage type you want to use for the VM when you migrate it.
- - In **Reserved Instances**, specify whether you want to use reserve instances for the VM when you migrate it.
+ - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
- If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - [Learn more](https://aka.ms/azurereservedinstances) about VM reserved instances.
1. In **VM Size**:
- - In **Sizing criterion**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data:
- - In **Performance history**, indicate the data duration on which you want to base the assessment
+ - In **Sizing criteria**, select if you want to base the assessment on server configuration data/metadata, or on performance-based data. If you use performance data:
+ - In **Performance history**, indicate the data duration on which you want to base the assessment.
- In **Percentile utilization**, specify the percentile value you want to use for the performance sample. - In **VM Series**, specify the Azure VM series you want to consider. - If you're using performance-based assessment, Azure Migrate suggests a value for you.
- - Tweak settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
- - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
+ - Tweak the settings as needed. For example, if you don't have a production environment that needs A-series VMs in Azure, you can exclude A-series from the list of series.
+ - In **Comfort factor**, indicate the buffer you want to use during the assessment. This accounts for issues like seasonal usage, short performance history, and likely increases during future usage. For example, if you use a comfort factor of two:
**Component** | **Effective utilization** | **Add comfort factor (2.0)**
--- | --- | ---
Run an assessment as follows:
- In **Offer**, specify the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) if you're enrolled. The assessment estimates the cost for that offer. - In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- - In **VM Uptime**, specify the duration (days per month/hour per day) that VMs will run.
+ - In **VM Uptime**, specify the duration (days per month/hour per day) for which the VMs will run.
- This is useful for Azure VMs that won't run continuously. - Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day.
Run an assessment as follows:
![Assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
-1. In **Assess Servers** > click **Next**.
+1. In **Assess Servers**, click **Next**.
-1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
+1. In **Select servers to assess** > **Assessment name**, specify a name for the assessment.
-1. In **Select or create a group** > select **Create New** and specify a group name.
+1. In **Select or create a group**, select **Create New** and specify a group name.
-1. Select the appliance, and select the VMs you want to add to the group. Then click **Next**.
+1. Select the appliance, and select the VMs you want to add to the group. Click **Next**.
1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment. 1. After the assessment is created, view it in **Servers** > **Azure Migrate: Discovery and assessment** > **Assessments**.
-1. Click **Export assessment**, to download it as an Excel file.
+1. Click **Export assessment** to download it as an Excel file.
> [!NOTE] > For performance-based assessments, we recommend that you wait at least a day after starting discovery before you create an assessment. This provides time to collect performance data with higher confidence. Ideally, after you start discovery, wait for the performance duration you specify (day/week/month) for a high-confidence rating.
An assessment describes:
To view an assessment:
-1. In **Windows, Linux and SQL Server** > **Azure Migrate: Discovery and assessment**, click the number next to **Assessments**.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click the number next to **Assessments**.
2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only): ![Assessment summary](./media/tutorial-assess-gcp/assessment-summary.png)
-3. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment.
+3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
### Review readiness
To view an assessment:
- **Not ready for Azure**: Shows issues and suggested remediation. - **Readiness unknown**: Used when Azure Migrate can't assess readiness, because of data availability issues.
-3. Select an **Azure readiness** status. You can view VM readiness details. You can also drill down to see VM details, including compute, storage, and network settings.
+3. Select an **Azure readiness** status. You can view the VM readiness details. You can also drill down to see VM details, including compute, storage, and network settings.
### Review cost estimates
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vmware-solution.md
ms. Previously updated : 09/14/2020 Last updated : 5/2/2022 #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure VMware Solution (AVS)
As part of your migration journey to Azure, you assess your on-premises workload
This article shows you how to assess discovered VMware virtual machines/servers for migration to Azure VMware Solution (AVS), using Azure Migrate. AVS is a managed service that allows you to run the VMware platform in Azure.
-In this tutorial, you learn how to:
+In this tutorial, you will learn how to:
> [!div class="checklist"] - Run an assessment based on server metadata and configuration information. - Run an assessment based on performance data. > [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
+> Tutorials show the quickest path for trying out a scenario and using default options where possible.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
Decide whether you want to run an assessment using sizing criteria based on serv
Run an assessment as follows:
-1. On the **Overview** page > **Servers, databases and web apps**, click **Assess and migrate servers**.
+1. Go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**.
1. In **Azure Migrate: Discovery and assessment**, click **Assess**.
Run an assessment as follows:
- In **Target location**, specify the Azure region to which you want to migrate. - Size and cost recommendations are based on the location that you specify. - The **Storage type** is defaulted to **vSAN**. This is the default storage type for an AVS private cloud.
- - In **Reserved Instances**, specify whether you want to use reserve instances for Azure VMware Solution nodes when you migrate your VMs.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**
- - [Learn more](../azure-vmware/reserved-instance.md)
+ - In **Reserved Instance**, specify whether you want to use reserved instances for Azure VMware Solution nodes when you migrate your VMs.
+ - If you decide to use a reserved instance, you can't specify **Discount (%)**.
+ - [Learn more](../azure-vmware/reserved-instance.md) about reserved instances.
1. In **VM Size**:
   - The **Node type** is defaulted to **AV36**. Azure Migrate recommends the number of nodes needed to migrate the servers to AVS.
   - In **FTT setting, RAID level**, select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS.
   - In **CPU Oversubscription**, specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads.
   - In **Memory overcommit factor**, specify the ratio of memory overcommit on the cluster. A value of 1 represents 100% memory use, 0.5 for example is 50%, and 2 would be using 200% of available memory. You can only add values from 0.5 to 10 up to one decimal place.
- - In **Dedupe and compression factor**, specify the anticipated dedupe and compression factor for your workloads. Actual value can be obtained from on-premises vSAN or storage config and this may vary by workload. A value of 3 would mean 3x so for 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
+ - In **Dedupe and compression factor**, specify the anticipated dedupe and compression factor for your workloads. The actual value can be obtained from on-premises vSAN or storage config and this may vary by workload. A value of 3 would mean 3x so for a 300GB disk only 100GB storage would be used. A value of 1 would mean no dedupe or compression. You can only add values from 1 to 10 up to one decimal place.
1. In **Node Size**:
- - In **Sizing criterion**, select if you want to base the assessment on static metadata, or on performance-based data. If you use performance data:
+ - In **Sizing criteria**, select if you want to base the assessment on static metadata, or on performance-based data. If you use performance data:
- In **Performance history**, indicate the data duration on which you want to base the assessment - In **Percentile utilization**, specify the percentile value you want to use for the performance sample. - In **Comfort factor**, indicate the buffer you want to use during assessment. This accounts for issues like seasonal usage, short performance history, and likely increases in future usage. For example, if you use a comfort factor of two:
Run an assessment as follows:
Memory | 8 GB | 16 GB

1. In **Pricing**:
- - In **Offer**, [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in is displayed. The Assessment estimates the cost for that offer.
+ - In **Offer/Licensing program**, the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in is displayed. The assessment estimates the cost for that offer.
- In **Currency**, select the billing currency for your account. - In **Discount (%)**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
Run an assessment as follows:
:::image type="content" source="./media/tutorial-assess-vmware-azure-vmware-solution/assess-group.png" alt-text="Add servers to a group":::
-1. Select the appliance, and select the servers you want to add to the group. Then click **Next**.
+1. Select the appliance and select the servers that you want to add to the group. Then click **Next**.
1. In **Review + create assessment**, review the assessment details, and click **Create Assessment** to create the group and run the assessment.
An AVS assessment describes:
- **Azure VMware Solution (AVS) readiness**: Whether the on-premises servers are suitable for migration to Azure VMware Solution (AVS). - **Number of Azure VMware Solution nodes**: Estimated number of Azure VMware Solution nodes required to run the servers. - **Utilization across AVS nodes**: Projected CPU, memory, and storage utilization across all nodes.
- - Utilization includes up front factoring in the following cluster management overheads such as the vCenter Server, NSX Manager (large),
+ - Utilization includes upfront factoring in the cluster management overheads such as the vCenter Server, NSX Manager (large),
NSX Edge and, if HCX is deployed, the HCX Manager and IX appliance, consuming ~44 vCPU (11 CPU), 75 GB of RAM, and 722 GB of storage before compression and deduplication.
   - The limiting factor determines the number of hosts/nodes required to accommodate the resources.
- **Monthly cost estimation**: The estimated monthly costs for all Azure VMware Solution (AVS) nodes running the on-premises VMs.
-You can click on **Sizing assumptions** to understand the assumptions that went in node sizing and resource utilization calculations. You can also edit the assessment properties, or recalculate the assessment.
+You can click on **Sizing assumptions** to understand the assumptions that went in node sizing and resource utilization calculations. You can also edit the assessment properties or recalculate the assessment.
## View an assessment To view an assessment:
-1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click the number next to ** Azure VMware Solution**.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click the number next to **Azure VMware Solution**.
1. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
To view an assessment:
2. In **Azure readiness**, review the readiness status. - **Ready for AVS**: The server can be migrated as-is to Azure (AVS) without any changes. It will start in AVS with full AVS support.
- - **Ready with conditions**: There might be some compatibility issues example internet protocol or deprecated OS in VMware and need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance the assessment suggests.
- - **Not ready for AVS**: The VM will not start in AVS. For example, if the on-premises VMware VM has an external device attached such as a cd-rom the VMware VMotion operation will fail (if using VMware VMotion).
+ - **Ready with conditions**: There might be some compatibility issues, for example, internet protocol or deprecated OS in VMware and need to be remediated before migrating to Azure VMware Solution. To fix any readiness problems, follow the remediation guidance that the assessment suggests.
+ - **Not ready for AVS**: The VM will not start in AVS. For example, if the on-premises VMware VM has an external device attached such as a CD-ROM the VMware VMotion operation will fail (if using VMware VMotion).
- **Readiness unknown**: Azure Migrate couldn't determine the readiness of the server because of insufficient metadata collected from the on-premises environment. 3. Review the suggested tool.
- - VMware HCX or Enterprise: For VMware servers, VMware Hybrid Cloud Extension (HCX) solution is the suggested migration tool to migrate your on-premises workload to your Azure VMware Solution (AVS) private cloud. Learn More.
+ - VMware HCX or Enterprise: For VMware servers, the VMware Hybrid Cloud Extension (HCX) solution is the suggested migration tool to migrate your on-premises workload to your Azure VMware Solution (AVS) private cloud.
 - Unknown: For servers imported via a CSV file, the default migration tool is unknown. However, for VMware servers, it is suggested to use the VMware Hybrid Cloud Extension (HCX) solution.
-4. Click on an AVS readiness status. You can view server readiness details, and drill down to see server details, including compute, storage, and network settings.
+4. Click an AVS readiness status. You can view the server readiness details, and drill down to see server details, including compute, storage, and network settings.
### Review cost estimates
The assessment summary shows the estimated compute and storage cost of running s
- Cost estimates are based on the number of AVS nodes required, considering the resource requirements of all the servers in total.
- Because the pricing is per node, the total cost is not split into compute and storage cost distributions.
- - The cost estimation is for running the on-premises servers in AVS. AVS assessment doesn't consider PaaS or SaaS costs.
+ - The cost estimation is for running the on-premises servers in AVS. The AVS assessment doesn't consider PaaS or SaaS costs.
2. Review monthly storage estimates. The view shows the aggregated storage costs for the assessed group, split over different types of storage disks.
3. You can drill down to see cost details for specific servers.
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
In this tutorial, you learn how to:
> * Start continuous discovery. > [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options.
+> Tutorials show the quickest path for trying out a scenario and use default options.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
If you just created a free Azure account, you're the owner of your subscription.
![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
-1. To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps.**
+ To register the appliance, your Azure account needs **permissions to register Azure Active Directory apps**.
-1. In Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
+1. In the Azure portal, navigate to **Azure Active Directory** > **Users** > **User Settings**.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default). ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-gcp/register-apps.png)
-1. In case the 'App registrations' settings is set to 'No', request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory App. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. If the 'App registrations' setting is set to 'No', ask the tenant/global admin to assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow registration of an Azure Active Directory app. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
## Prepare GCP instances
Set up a new project.
![Page showing Server Assessment tool added by default.](./media/tutorial-discover-gcp/added-tool.png) > [!NOTE]
-> If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers.[Learn more](create-manage-projects.md#find-a-project)
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Set up the appliance
The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate: D
[Learn more](migrate-appliance.md) about the Azure Migrate appliance.
-To set up the appliance you:
+To set up the appliance, you:
1. Provide an appliance name and generate a project key in the portal.
1. Download a zipped file with the Azure Migrate installer script from the Azure portal.
1. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
1. Execute the PowerShell script to launch the appliance web application (see the sketch after this list).
-1. Configure the appliance for the first time, and register it with the project using the project key.
+1. Configure the appliance for the first time and register it with the project using the project key.
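In practice, step 4 amounts to something like the following sketch. The script name is an assumption for illustration; check the extracted zip for the actual installer script.

```powershell
# Run from an elevated PowerShell console in the folder where the zip was extracted.
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope Process

# Script name is illustrative; use the installer script shipped in the zip.
.\AzureMigrateInstaller.ps1
```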
### 1. Generate the project key
1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
-2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
+2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of your GCP virtual servers. The name should be alphanumeric with 14 characters or fewer.
-4. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
+4. Click **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
5. After the successful creation of the Azure resources, a **project key** is generated.
6. Copy the key as you will need it to complete the registration of the appliance during its configuration.
### 2. Download the installer script
-In **2: Download Azure Migrate appliance**, click on **Download**.
+In **2: Download Azure Migrate appliance**, click **Download**.
### Verify security
-Check that the zipped file is secure, before you deploy it.
+Check that the zipped file is secure before you deploy it.
1. On the machine to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the zipped file:
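The command itself is elided in this digest. On Windows, this is typically done with the built-in CertUtil tool; a minimal sketch, assuming the default download name:

```console
CertUtil -HashFile AzureMigrateInstaller.zip SHA256
```

Compare the printed hash against the value published for the downloaded version.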
Make sure that the appliance can connect to Azure URLs for [public](migrate-appl
Set up the appliance for the first time.
-1. Open a browser on any machine that can connect to the appliance, and open the URL of the appliance web app: **https://*appliance name or IP address*: 44368**.
+1. Open a browser on any machine that can connect to the appliance and open the URL of the appliance web app: **https://*appliance name or IP address*: 44368**.
Alternatively, you can open the app from the desktop by clicking the app shortcut.
-2. Accept the **license terms**, and read the third-party information.
+2. Accept the **license terms** and read the third-party information.
#### Set up prerequisites and register the appliance
Now, connect from the appliance to the GCP servers to be discovered, and start t
- If you choose **Add multiple items**, you can add multiple records at once by specifying the server **IP address/FQDN** with the friendly name for credentials in the text box. **Verify** the added records and click **Save**.
- If you choose **Import CSV** _(selected by default)_, you can download a CSV template file and populate the file with the server **IP address/FQDN** and friendly name for credentials. You then import the file into the appliance, **verify** the records in the file, and click **Save**.
-5. On clicking Save, appliance will try validating the connection to the servers added and show the **Validation status** in the table against each server.
+5. When you click **Save**, the appliance tries to validate the connection to the added servers and shows the **Validation status** in the table against each server.
- If validation fails for a server, review the error by clicking **Validation failed** in the Status column of the table. Fix the issue, and validate again.
- To remove a server, click **Delete**.
6. You can **revalidate** the connectivity to servers anytime before starting the discovery.
Now, connect from the appliance to the GCP servers to be discovered, and start t
### Start discovery
-Click on **Start discovery**, to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
+Click **Start discovery** to kick off discovery of the successfully validated servers. After the discovery has been successfully initiated, you can check the discovery status against each server in the table.
## How discovery works
Click on **Start discovery**, to kick off discovery of the successfully validate
After discovery finishes, you can verify that the servers appear in the portal.
1. Open the Azure Migrate dashboard.
-2. In **Azure Migrate - Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+2. On the **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
## Next steps
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-aks.md
az network nic list --resource-group nodeResourceGroup -o table
## Use Azure premium fileshare
- Use [Azure premium fileshare](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal) for persistent storage that can be used by one or many pods, and can be dynamically or statically provisioned. Azure premium fileshare gives you best performance for your application if you expect large number of I/O operations on the file storage. To learn more , see [how to enable Azure Files](../aks/azure-files-dynamic-pv.md).
+ Use [Azure premium fileshare](../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal) for persistent storage that can be used by one or many pods, and can be dynamically or statically provisioned. Azure premium fileshare gives you the best performance for your application if you expect a large number of I/O operations on the file storage. To learn more, see [how to enable Azure Files](../aks/azure-files-dynamic-pv.md).
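As a minimal sketch of static provisioning, the share itself can be created with the Azure CLI before wiring it into the cluster. The account, share, and resource group names below are placeholder assumptions; premium file shares require a FileStorage account with a Premium SKU.

```console
# Create a FileStorage account that supports premium file shares.
az storage account create \
  --name mypremiumstorage \
  --resource-group myResourceGroup \
  --kind FileStorage \
  --sku Premium_LRS

# Create a 100-GiB premium file share in that account.
az storage share-rm create \
  --resource-group myResourceGroup \
  --storage-account mypremiumstorage \
  --name myaksshare \
  --quota 100
```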
## Next steps
-Create an AKS cluster [using the Azure CLI](./learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](./learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](./learn/quick-kubernetes-deploy-portal).
+Create an AKS cluster [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
network-watcher Diagnose Vm Network Traffic Filtering Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md
network-watcher Previously updated : 01/07/2021 Last updated : 05/04/2022 #Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
# Quickstart: Diagnose a virtual machine network traffic filter problem - Azure CLI
-In this quickstart you deploy a virtual machine (VM), and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
+In this quickstart, you deploy a virtual machine (VM), and then check communications to an IP address and URL and from an IP address. You determine the cause of a communication failure and how you can resolve it.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
az network watcher test-ip-flow \
--out table ```
-After several seconds, the result returned informs you that access is allowed by a security rule named **AllowInternetOutbound**.
+After several seconds, the result returned informs you that access is allowed by a security rule named **AllowInternetOutBound**.
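The digest shows only fragments of the command above; a representative invocation looks like the following sketch, assuming a VM named myVm in resource group myResourceGroup with private IP 10.0.0.4 (all placeholders):

```console
az network watcher test-ip-flow \
  --vm myVm \
  --resource-group myResourceGroup \
  --direction Outbound \
  --protocol TCP \
  --local 10.0.0.4:60000 \
  --remote 13.107.21.200:80 \
  --out table
```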
Test outbound communication from the VM to 172.31.0.100:
az network watcher test-ip-flow \
--out table ```
-The result returned informs you that access is denied by a security rule named **DefaultOutboundDenyAll**.
+The result returned informs you that access is denied by a security rule named **DenyAllOutBound**.
Test inbound communication to the VM from 172.31.0.100:
az network watcher test-ip-flow \
--out table ```
-The result returned informs you that access is denied because of a security rule named **DefaultInboundDenyAll**. Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
+The result returned informs you that access is denied because of a security rule named **DenyAllInBound**. Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
## View details of a security rule
The returned output includes the following text for the **AllowInternetOutbound*
You can see in the previous output that **destinationAddressPrefix** is **Internet**. It's not clear how 13.107.21.200 relates to **Internet** though. You see several address prefixes listed under **expandedDestinationAddressPrefix**. One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher priority (lower number) rules shown in the previous output that override this rule. To deny outbound communication to an IP address, you could add a security rule with a higher priority that denies port 80 outbound to the IP address.
-When you ran the `az network watcher test-ip-flow` command to test outbound communication to 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DefaultOutboundDenyAll** rule denied the communication. The **DefaultOutboundDenyAll** rule equates to the **DenyAllOutBound** rule listed in the following output from the `az network nic list-effective-nsg` command:
+When you ran the `az network watcher test-ip-flow` command to test outbound communication to 172.31.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllOutBound** rule denied the communication. That rule appears in the following output from the `az network nic list-effective-nsg` command:
```console {
When you ran the `az network watcher test-ip-flow` command to test outbound comm
The rule lists **0.0.0.0/0** as the **destinationAddressPrefix**. The rule denies the outbound communication to 172.31.0.100, because the address is not within the **destinationAddressPrefix** of any of the other outbound rules in the output from the `az network nic list-effective-nsg` command. To allow the outbound communication, you could add a security rule with a higher priority that allows outbound traffic to port 80 at 172.31.0.100.
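As an illustration of that remediation, here is a sketch of such a rule with the Azure CLI; the NSG name, rule name, and priority are placeholder assumptions:

```console
# Allow outbound TCP 80 to 172.31.0.100 with a rule that outranks DenyAllOutBound.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myVmNsg \
  --name AllowOutbound80 \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes 172.31.0.100 \
  --destination-port-ranges 80
```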
-When you ran the `az network watcher test-ip-flow` command in [Use IP flow verify](#use-ip-flow-verify) to test inbound communication from 172.131.0.100, the output informed you that the **DefaultInboundDenyAll** rule denied the communication. The **DefaultInboundDenyAll** rule equates to the **DenyAllInBound** rule listed in the following output from the `az network nic list-effective-nsg` command:
+When you ran the `az network watcher test-ip-flow` command in [Use IP flow verify](#use-ip-flow-verify) to test inbound communication from 172.31.0.100, the output informed you that the **DenyAllInBound** rule denied the communication. That rule appears in the following output from the `az network nic list-effective-nsg` command:
```console {
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
documentationcenter: network-watcher
editor: Previously updated : 01/07/2021 Last updated : 05/03/2022 ms.assetid:
Test-AzNetworkWatcherIPFlow `
-RemotePort 80 ```
-The result returned informs you that access is denied by a security rule named **DefaultOutboundDenyAll**.
+The result returned informs you that access is denied by a security rule named **DenyAllOutBound**.
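Only fragments of the command appear in this digest; a representative sketch of the outbound test, assuming `$networkWatcher` and `$vm` were populated in the quickstart's earlier setup steps:

```powershell
# Variable names are assumptions; they come from the quickstart's setup steps.
Test-AzNetworkWatcherIPFlow `
    -NetworkWatcher $networkWatcher `
    -TargetVirtualMachineId $vm.Id `
    -Direction Outbound `
    -Protocol TCP `
    -LocalIPAddress 10.0.0.4 `
    -LocalPort 60000 `
    -RemoteIPAddress 172.31.0.100 `
    -RemotePort 80
```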
Test inbound communication to the VM from 172.31.0.100:
Test-AzNetworkWatcherIPFlow `
-RemotePort 60000 ```
-The result returned informs you that access is denied because of a security rule named **DefaultInboundDenyAll**. Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
+The result returned informs you that access is denied because of a security rule named **DenyAllInBound**.
+
+ Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems.
## View details of a security rule
The returned output includes the following text for the **AllowInternetOutbound*
You can see in the output that **DestinationAddressPrefix** is **Internet**. It's not clear how 13.107.21.200, the address you tested in [Use IP flow verify](#use-ip-flow-verify), relates to **Internet** though. You see several address prefixes listed under **ExpandedDestinationAddressPrefix**. One of the prefixes in the list is **12.0.0.0/6**, which encompasses the 12.0.0.1-15.255.255.254 range of IP addresses. Since 13.107.21.200 is within that address range, the **AllowInternetOutBound** rule allows the outbound traffic. Additionally, there are no higher **priority** (lower number) rules listed in the output returned by `Get-AzEffectiveNetworkSecurityGroup` that override this rule. To deny outbound communication to 13.107.21.200, you could add a security rule with a higher priority that denies port 80 outbound to the IP address.
-When you ran the `Test-AzNetworkWatcherIPFlow` command to test outbound communication to 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DefaultOutboundDenyAll** rule denied the communication. The **DefaultOutboundDenyAll** rule equates to the **DenyAllOutBound** rule listed in the following output from the `Get-AzEffectiveNetworkSecurityGroup` command:
+When you ran the `Test-AzNetworkWatcherIPFlow` command to test outbound communication to 172.31.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllOutBound** rule denied the communication. That rule appears in the following output from the `Get-AzEffectiveNetworkSecurityGroup` command:
```powershell {
When you ran the `Test-AzNetworkWatcherIPFlow` command to test outbound communic
The rule lists **0.0.0.0/0** as the **DestinationAddressPrefix**. The rule denies the outbound communication to 172.31.0.100, because the address is not within the **DestinationAddressPrefix** of any of the other outbound rules in the output from the `Get-AzEffectiveNetworkSecurityGroup` command. To allow the outbound communication, you could add a security rule with a higher priority that allows outbound traffic to port 80 at 172.31.0.100.
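A sketch of that remediation in PowerShell; the NSG name, rule name, and priority are placeholder assumptions:

```powershell
# Fetch the NSG, add a high-priority allow rule, and persist the change.
$nsg = Get-AzNetworkSecurityGroup -Name myVmNsg -ResourceGroupName myResourceGroup
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name AllowOutbound80 `
    -Access Allow `
    -Direction Outbound `
    -Priority 100 `
    -Protocol Tcp `
    -SourceAddressPrefix "*" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix 172.31.0.100 `
    -DestinationPortRange 80 | Set-AzNetworkSecurityGroup
```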
-When you ran the `Test-AzNetworkWatcherIPFlow` command to test inbound communication from 172.131.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DefaultInboundDenyAll** rule denied the communication. The **DefaultInboundDenyAll** rule equates to the **DenyAllInBound** rule listed in the following output from the `Get-AzEffectiveNetworkSecurityGroup` command:
+When you ran the `Test-AzNetworkWatcherIPFlow` command to test inbound communication from 172.31.0.100 in [Use IP flow verify](#use-ip-flow-verify), the output informed you that the **DenyAllInBound** rule denied the communication. That rule appears in the following output from the `Get-AzEffectiveNetworkSecurityGroup` command:
```powershell {
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/application-best-practices.md
description: Learn about best practices for building an app by using Azure Datab
--+++ Last updated 12/10/2020
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concept-reserved-pricing.md
description: Prepay for Azure Database for PostgreSQL compute resources with res
--+++ Last updated 10/06/2021
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aks.md
description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Da
--+++ Last updated 07/14/2020
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-certificate-rotation.md
description: Learn about the upcoming changes of root certificate changes that w
--+++ Last updated 09/02/2020
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-access-and-security-private-link.md
description: Learn how Private link works for Azure Database for PostgreSQL - Si
--+++ Last updated 03/10/2020
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-data-encryption-postgresql.md
description: Azure Database for PostgreSQL Single server data encryption with a
--+++ Last updated 01/13/2020
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-infrastructure-double-encryption.md
description: Learn about using Infrastructure double encryption to add a second
--+++ Last updated 6/30/2020
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Single Ser
description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server." --+++ ms.devlang: csharp
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-go.md
Title: 'Quickstart: Connect with Go - Azure Database for PostgreSQL - Single Ser
description: This quickstart provides a Go programming language sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server. --+++ ms.devlang: golang
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-nodejs.md
Title: 'Quickstart: Use Node.js to connect to Azure Database for PostgreSQL - Si
description: This quickstart provides a Node.js code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server. --+++ ms.devlang: javascript
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-php.md
Title: 'Quickstart: Connect with PHP - Azure Database for PostgreSQL - Single Se
description: This quickstart provides a PHP code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server. --+++ ms.devlang: php
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-python.md
Title: 'Quickstart: Connect with Python - Azure Database for PostgreSQL - Single
description: This quickstart provides Python code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server. --+++ ms.devlang: python
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-ruby.md
Title: 'Quickstart: Connect with Ruby - Azure Database for PostgreSQL - Single S
description: This quickstart provides a Ruby code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server. --+++ ms.devlang: ruby
postgresql Azure Pipelines Deploy Database Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/azure-pipelines-deploy-database-task.md
description: Enable Azure Database for PostgreSQL Flexible Server CLI task for
--+++ Last updated 11/30/2021
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL
--+++ Last updated 11/30/2021
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
description: This article describes the scheduled maintenance feature in Azure D
--+++ Last updated 11/30/2021
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
description: Learn about connectivity and networking options in the Flexible Ser
--+++ Last updated 11/30/2021
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
Title: 'Quickstart: Connect using Azure CLI - Azure Database for PostgreSQL - Fl
description: This quickstart provides several ways to connect with Azure CLI with Azure Database for PostgreSQL - Flexible Server. --+++ Last updated 11/30/2021
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Flexible S
description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server." --+++ ms.devlang: csharp
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexibl
description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL Flexible server. --+++ ms.devlang: java
postgresql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-high-availability-cli.md
Title: Manage zone redundant high availability - Azure CLI - Azure Database for
description: This article describes how to configure zone redundant high availability in Azure Database for PostgreSQL flexible Server with the Azure CLI. --+++ Last updated 11/30/2021
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
--+++ Last updated 11/30/2021
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
Title: Azure Database for PostgreSQL - Flexible Server - Scheduled maintenance - Azure portal description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Flexible server from the Azure portal.-- +++ Last updated 11/30/2021
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL - Flexible Serv
description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure CLI. --+++ Last updated 11/30/2021
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
Title: 'Manage server - Azure portal - Azure Database for PostgreSQL - Flexible
description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure portal. --+++ Last updated 11/30/2021
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
Title: Restart - Azure portal - Azure Database for PostgreSQL Flexible Server
description: This article describes how to restart operations in Azure Database for PostgreSQL through the Azure CLI. --+++ Last updated 11/30/2021
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for PostgreSQL - Flexible Server with Azure CLI
description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure CLI. --+++ Last updated 11/30/2021
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
Title: Stop/start - Azure CLI - Azure Database for PostgreSQL Flexible Server
description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure CLI. --+++ Last updated 11/30/2021
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors
description: This topic gives guidance on troubleshooting common issues with Azure CLI when using PostgreSQL Flexible Server. --+++ Last updated 11/30/2021
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
Title: 'Connect to Azure Database for PostgreSQL flexible server with private ac
description: This article shows how to create and connect to Azure Database for PostgreSQL flexible server with private access or virtual network using Azure portal. --+++ Last updated 11/30/2021
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server b
description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server. --+++ Last updated 11/30/2021
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
Title: Tutorial on how to Deploy Django app with App Service and Azure Database
description: Deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server in virtual network --+++ ms.devlang: azurecli Last updated 11/30/2021
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
Title: 'Tutorial: Create Azure Database for PostgreSQL - Flexible Server and Azu
description: Quickstart guide to create Azure Database for PostgreSQL - Flexible Server with Web App in a virtual network --+++ ms.devlang: azurecli Last updated 11/30/2021
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-connect-query-guide.md
description: Links to quickstarts showing how to connect to your Azure Database
--+++ Last updated 09/21/2020
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/how-to-deploy-github-action.md
description: Use Azure PostgreSQL from a GitHub Actions workflow
--+++ Last updated 10/12/2020
postgresql Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-cli.md
Title: Private Link - Azure CLI - Azure Database for PostgreSQL - Single server
description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure CLI --+++ Last updated 01/09/2020
postgresql Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-configure-privatelink-portal.md
Title: Private Link - Azure portal - Azure Database for PostgreSQL - Single serv
description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure portal --+++ Last updated 01/09/2020
postgresql Howto Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-connection-string-powershell.md
description: This article provides an Azure PowerShell example to generate a con
--+++ Last updated 8/6/2020
postgresql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-cli.md
Title: Data encryption - Azure CLI - for Azure Database for PostgreSQL - Single
description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure CLI. --+++ Last updated 03/30/2020
postgresql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-portal.md
Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Sing
description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal. --+++ Last updated 01/13/2020
postgresql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-troubleshoot.md
Title: Troubleshoot data encryption - Azure Database for PostgreSQL - Single Ser
description: Learn how to troubleshoot the data encryption on your Azure Database for PostgreSQL - Single Server --+++ Last updated 02/13/2020
postgresql Howto Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-data-encryption-validation.md
Title: How to ensure validation of the Azure Database for PostgreSQL - Data encr
description: Learn how to validate the encryption of the Azure Database for PostgreSQL - Data encryption using the customers managed key. --+++ Last updated 04/28/2020
postgresql Howto Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-deny-public-network-access.md
Title: Deny Public Network Access - Azure portal - Azure Database for PostgreSQL
description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for PostgreSQL Single server --+++ Last updated 03/10/2020
postgresql Howto Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-tls-configurations.md
Title: TLS configuration - Azure portal - Azure Database for PostgreSQL - Single
description: Learn how to set TLS configuration using Azure portal for your Azure Database for PostgreSQL Single server --+++ Last updated 06/02/2020
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
Title: Asset insights on your data in Microsoft Purview
-description: This how-to guide describes how to view and use Microsoft Purview Insights asset reporting on your data.
+description: This how-to guide describes how to view and use Microsoft Purview Data Estate Insights asset reporting on your data.
Last updated 09/27/2021
This how-to guide describes how to access, view, and filter Microsoft Purview Asset insight reports for your data. > [!IMPORTANT]
-> Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
In this how-to guide, you'll learn how to: > [!div class="checklist"]
-> * View insights from your Microsoft Purview account.
+> * View data estate insights from your Microsoft Purview account.
> * Get a bird's eye view of your data.
> * Drill down for more asset count details.
## Prerequisites
-Before getting started with Microsoft Purview insights, make sure that you've completed the following steps:
+Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
* Set up your Azure resources and populate the account with data.
In Microsoft Purview, you can register and scan source types. Once the scan is c
:::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
-1. On the Microsoft Purview **Home** page, select **Insights** on the left menu.
+1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
- :::image type="content" source="./media/asset-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
+ :::image type="content" source="./media/asset-insights/view-insights.png" alt-text="View your data estate insights in the Azure portal":::
-1. In the **Insights** area, select **Assets** to display the Microsoft Purview **Asset insights** report.
+1. In the **Data Estate Insights** area, select **Assets** to display the Microsoft Purview **Asset insights** report.
### View Asset Insights
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Microsoft Purview uses **Collections** to organize and manage access across its
A collection is a tool Microsoft Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to Microsoft Purview's resources are managed from collections in the Microsoft Purview account itself. > [!NOTE]
-> As of November 8th, 2021, ***Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
+> As of November 8th, 2021, ***Microsoft Purview Data Estate Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
## Roles
Microsoft Purview uses a set of predefined roles to control who can access what within the account. These roles are currently:
- **Collection administrator** - a role for users that will need to assign roles to other users in Microsoft Purview or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
-- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
+- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view data estate insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms.
- **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role.
- **Policy author (Preview)** - a role that allows a user to view, update, and delete Microsoft Purview policies through the policy management app within Microsoft Purview.
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
Title: Classification reporting on your data in Microsoft Purview using Microsoft Purview Insights
+ Title: Classification reporting on your data in Microsoft Purview using Microsoft Purview Data Estate Insights
description: This how-to guide describes how to view and use Microsoft Purview classification reporting on your data. Last updated 09/27/2021
-# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
+# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified and classified and labeled during scanning.
This how-to guide describes how to access, view, and filter Microsoft Purview Classification insight reports for your data. > [!IMPORTANT]
-> Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, Azure Cosmos DB (SQL API), Azure Synapse Analytics (formerly SQL DW), Azure SQL Database, Azure SQL Managed Instance, SQL Server, Amazon S3 buckets, Amazon RDS databases (public preview), and Power BI.
In this how-to guide, you'll learn how to:
## Prerequisites
-Before getting started with Microsoft Purview insights, make sure that you've completed the following steps:
+Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
- Set up your Azure resources and populated the relevant accounts with test data
Before getting started with Microsoft Purview insights, make sure that you've co
For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md).
-## Use Microsoft Purview classification insights
+## Use Microsoft Purview Data Estate Insights for classifications
In Microsoft Purview, classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
Microsoft Purview uses the same sensitive information types as Microsoft 365, al
1. On the **Overview** page, in the **Get Started** section, select the **Microsoft Purview governance portal** tile.
-1. In Microsoft Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
+1. In Microsoft Purview, select the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Data Estate Insights** area.
-1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classification** to display the Microsoft Purview **Classification insights** report.
+1. In the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classification** to display the Microsoft Purview **Classification insights** report.
:::image type="content" source="./media/insights/select-classification-labeling.png" alt-text="Classification insights report" lightbox="media/insights/select-classification-labeling.png":::
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-sensitivity-labels.md
The following sections walk you through the process of implementing labeling for
- Capture all test cases for your labels. Test your label policies with all applications you want to secure. - Promote sensitivity label policies to the Microsoft Purview Data Map. - Run test scans from the Microsoft Purview Data Map on different data sources like hybrid cloud and on-premises to identify sensitivity labels.-- Gather and consider insights, for example, by using Microsoft Purview Insights. Use alerting mechanisms to mitigate potential breaches of regulations.
+- Gather and consider insights, for example, by using Microsoft Purview Data Estate Insights. Use alerting mechanisms to mitigate potential breaches of regulations.
By using sensitivity labels with Microsoft Purview Data Map, you can extend information protection beyond the border of your Microsoft data estate to your on-premises, hybrid cloud, multicloud, and software as a service (SaaS) scenarios.
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-insights.md
This article provides an overview of the Insights feature in Microsoft Purview.
Insights are one of the key pillars of Microsoft Purview. The feature provides customers a single pane of glass view into their catalog, and further aims to provide specific insights to data source administrators, business users, data stewards, data officers, and security administrators. Currently, Microsoft Purview has the following Insights reports that will be available to customers during Insights' public preview.
> [!IMPORTANT]
-> Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Asset Insights
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/glossary-insights.md
Title: Glossary report on your data using Microsoft Purview Insights
-description: This how-to guide describes how to view and use Microsoft Purview Insights glossary reporting on your data.
+ Title: Glossary report on your data using Microsoft Purview Data Estate Insights
+description: This how-to guide describes how to view and use Microsoft Purview Data Estate Insights glossary reporting on your data.
Last updated 09/27/2021
This how-to guide describes how to access, view, and filter Microsoft Purview Glossary insight reports for your data. > [!IMPORTANT]
-> Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
In this how-to guide, you'll learn how to:
In this how-to guide, you'll learn how to:
## Prerequisites
-Before getting started with Microsoft Purview insights, make sure that you've completed the following steps:
+Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
- Set up your Azure resources and populate the account with data
In Microsoft Purview, you can create glossary terms and attach them to assets. L
:::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Launch Microsoft Purview from the Azure portal":::
-1. On the Microsoft Purview **Home** page, select **Insights** on the left menu.
+1. On the Microsoft Purview **Home** page, select **Data Estate Insights** on the left menu.
- :::image type="content" source="./media/glossary-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
+ :::image type="content" source="./media/glossary-insights/view-insights.png" alt-text="View your data estate insights in the Azure portal":::
-1. In the **Insights** area, select **Glossary** to display the Microsoft Purview **Glossary insights** report.
+1. In the **Data Estate Insights** area, select **Glossary** to display the Microsoft Purview **Glossary insights** report.
**Glossary Insights** provides you, as a business user, valuable information to maintain a well-defined glossary for your organization.
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview is a unified data governance service that helps you manage and
:::image type="content" source="./media/overview/high-level-overview.png" alt-text="High-level architecture of Microsoft Purview, showing multi-cloud and on premises sources flowing into Microsoft Purview, and Microsoft Purview's apps (Data Catalog, Map, and Insights) allowing data consumers and data curators to view and manage metadata. This metadata is also being ported to external analytics services from Microsoft Purview for more processing." lightbox="./media/overview/high-level-overview-large.png":::
+>[!TIP]
+> Looking to govern your data in Microsoft 365 by keeping what you need and deleting what you don't? Use [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management).
+ Microsoft Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
Microsoft Purview automates data discovery by providing data scanning and classi
|-|--|
|[Data Map](#data-map) | Makes your data meaningful by graphing your data assets, and their relationships, across your data estate. The data map is used to discover data and manage access to that data. |
|[Data Catalog](#data-catalog) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |
-|[Data Insights](#data-insights) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. |
+|[Data Estate Insights](#data-estate-insights) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. |
## Data Map
Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operational systems on-premises and in the cloud. Microsoft Purview Data Map is automatically kept up to date with its built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.
-Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview data insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
+Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview Data Estate Insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
For more information, see our [introduction to Data Map](concept-elastic-data-ma
With the Microsoft Purview Data Catalog, business and technical users alike can quickly & easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features like business glossary management and ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets starting from the operational systems on-premises, through movement, transformation & enrichment with various data storage & processing systems in the cloud to consumption in an analytics system like Power BI. For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md).
-## Data Insights
-With the Microsoft Purview data insights, data officers and security officers can get a bird's eye view and at a glance understand what data is actively scanned, where sensitive data is and how it moves.
+## Data Estate Insights
+With Microsoft Purview Data Estate Insights, data officers and security officers can get a bird's eye view and at a glance understand what data is actively scanned, where sensitive data is, and how it moves.
-For more information, see our [introduction to Data Insights](concept-insights.md).
+For more information, see our [introduction to Data Estate Insights](concept-insights.md).
## Discovery challenges for data consumers
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
Scans can be managed or run again on completion.
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Or you can follow the [generic guide for creating data access policies](how-to-d
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Or you can follow the [generic guide for creating data access policies](how-to-d
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
-* [Data insights in Microsoft Purview](concept-insights.md)
+* [Data Estate Insights in Microsoft Purview](concept-insights.md)
* [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) * [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
Scans can be managed or run again on completion.
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-data-explorer.md
To create and run a new scan, follow these steps:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
To create and run a new scan, follow these steps:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
To manage a scan, do the following:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
To create and run a new scan, do the following:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
To create and run a new scan, do the following:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database-managed-instance.md
To create and run a new scan, complete the following steps:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog]
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
To create and run a new scan, complete the following steps:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Cassandra instance, or sc
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
-
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.), you need to configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
Previously updated : 01/20/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Db2 database, or scope th
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.12.7984.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
- * Manually download a Db2 JDBC driver from [here](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) onto your virtual machine where self-hosted integration runtime is running.
+ * Download the [Db2 JDBC driver](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) on the machine where your self-hosted integration runtime is running. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder (a sketch of granting these follows this list).
* The Db2 user must have the CONNECT permission. Microsoft Purview connects to the syscat tables in the IBM Db2 environment when importing metadata.
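Granting the folder permissions called out in the note above can be scripted. The following is a minimal sketch, assuming a Windows machine, the default `NT SERVICE\DIAHostService` account, and an example driver path; adjust both values to your environment.

```python
# Minimal sketch: grant the self-hosted integration runtime's default service
# account read access to a JDBC driver folder on Windows.
import subprocess

driver_folder = r"D:\Drivers\Db2"  # example path; use your own driver folder

# (OI)(CI) inherits the grant to files and subfolders; RX is "Read and
# execute", which on a folder also covers "List folder contents".
subprocess.run(
    ["icacls", driver_folder, "/grant", r"NT SERVICE\DIAHostService:(OI)(CI)RX"],
    check=True,
)
```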
To create and run a new scan, do the following:
Usage of NOT and special characters isn't acceptable.
- 1. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
-
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ 1. **Driver location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\Db2`. It must be a valid path to the folder that contains the JAR files. Make sure the driver is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Maximum memory available**: Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the Db2 source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
Previously updated : 01/20/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire erwin Mart server, or sco
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). > [!IMPORTANT] > Make sure to install the self-hosted integration runtime and the Erwin Data Modeler software on the same machine where the erwin Mart instance is running.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
Previously updated : 01/20/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Google BigQuery project,
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
- * Download and unzip BigQuery's JDBC driver on the machine where your self-hosted integration runtime is running. You can find the driver [here](https://cloud.google.com/bigquery/providers/simba-drivers).
+ * Download and unzip the [BigQuery JDBC driver](https://cloud.google.com/bigquery/providers/simba-drivers) on the machine where your self-hosted integration runtime is running. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
## Register
Follow the steps below to scan a Google BigQuery project to automatically identi
To learn more about credentials, see [this guide](manage-credentials.md).
- 1. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
-
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ 1. **Driver location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\GoogleBigQuery`. It must be a valid path to the folder that contains the JAR files. Make sure the driver is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Dataset**: Specify a list of BigQuery datasets to import. For example, dataset1; dataset2. When the list is empty, all available datasets are imported.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
Previously updated : 02/25/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Hive metastore database,
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md).
- * Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
- * Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 isn't supported.
+ * Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 isn't supported. Note down the folder path; you'll use it to set up the scan.
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
+ > [!Note]
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
## Register
Use the following steps to scan Hive Metastore databases to automatically identi
:::image type="content" source="media/register-scan-hive-metastore-source/databricks-credentials.png" alt-text="Screenshot that shows Azure Databricks username and password examples as property values." border="true":::
- 1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver location on your machine where the self-hosted integration runtime is running. This should be a valid path to the folder for JAR files.
+ 1. **Metastore JDBC Driver Location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\HiveMetastore`. It must be a valid path to the folder that contains the JAR files. Make sure the driver is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
> [!Note]
- > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
- >
> If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar). Version 3.0.3 is not supported. 1. **Metastore JDBC Driver Class**: Provide the class name for the connection driver. For example, enter **com.microsoft.sqlserver.jdbc.SQLServerDriver**.
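Before entering the driver class name, it can help to confirm the class is actually packaged in the JAR you downloaded. A small sketch follows; the path and JAR file name are examples and may differ from the file you actually have.

```python
# Sketch: check that a JDBC driver class exists inside a downloaded JAR before
# using it as the "Metastore JDBC Driver Class". Path and file name are examples.
import zipfile

jar_path = r"D:\Drivers\HiveMetastore\mssql-jdbc-10.2.0.jre11.jar"  # example
class_name = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
entry = class_name.replace(".", "/") + ".class"

with zipfile.ZipFile(jar_path) as jar:
    print(entry in jar.namelist())  # True if the class is packaged in this JAR
```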
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, use the following guides to learn more about Microsoft Purview and your data: -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search the data catalog](how-to-search-catalog.md)
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Looker server, or scope t
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
If your data store is publicly accessible, you can use the managed Azure integration runtime for the scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
Previously updated : 04/12/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan one or more MongoDB database(s) ent
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.16.8093.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
To create and run a new scan, do the following:
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire MySQL server, or scope th
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.) you need to configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
To create and run a new scan, do the following:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
Previously updated : 03/28/2022 Last updated : 05/04/2022
Currently, the Oracle service name isn't captured in the metadata or hierarchy.
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
- * Manually download an Oracle JDBC driver from [here](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) onto your virtual machine where self-hosted integration runtime is running.
+ * Download the [Oracle JDBC driver](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) on the machine where your self-hosted integration runtime is running. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
## Register
On the **Register sources (Oracle)** screen, do the following:
1. Enter a **Name** under which the data source will be listed in the Catalog. 1. Enter the **Host** name to connect to an Oracle source. This can either be:
- * A host name used by JDBC to connect to the database server. For example: `MyDatabaseServer.com`
+ * A host name used to connect to the database server. For example: `MyDatabaseServer.com`
* An IP address. For example: `192.169.1.2`
-1. Enter the **Port number** used by JDBC to connect to the database server (1521 by default for Oracle).
+1. Enter the **Port number** used to connect to the database server (1521 by default for Oracle).
-1. Enter the **Oracle service name** used by JDBC to connect to the database server.
+1. Enter the **Oracle service name** (not Oracle UID) used to connect to the database server. (See the sketch after this list for how these values fit together.)
1. Select a collection or create a new one (Optional)
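For orientation, the host, port, and service name you enter combine into a thin-driver JDBC URL along the following lines; `ORCLPDB1` is a hypothetical service name used only for illustration.

```python
# Illustrative only: how host, port, and Oracle service name typically combine
# into a thin-driver JDBC URL. "ORCLPDB1" is a hypothetical service name.
host = "MyDatabaseServer.com"
port = 1521  # Oracle's default listener port
service_name = "ORCLPDB1"

jdbc_url = f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
print(jdbc_url)  # jdbc:oracle:thin:@//MyDatabaseServer.com:1521/ORCLPDB1
```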
To create and run a new scan, do the following:
1. **Credential**: Select the credential to connect to your data source. Make sure to: * Select Basic Authentication while creating a credential.
- * Provide the user name used by JDBC to connect to the database server in the User name input field.
- * Store the user password used by JDBC to connect to the database server in the secret key.
+ * Provide the user name used to connect to the database server in the User name input field.
+ * Store the user password used to connect to the database server in the secret key.
1. **Schema**: List the subset of schemas to import, expressed as a semicolon-separated list in a **case-sensitive** manner. For example, `schema1; schema2`. All user schemas are imported if that list is empty. All system schemas (for example, SysAdmin) and objects are ignored by default.
To create and run a new scan, do the following:
Usage of NOT and special characters isn't acceptable.
- 1. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
-
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ 1. **Driver location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\Oracle`. It must be a valid path to the folder that contains the JAR files. Make sure the driver is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Stored procedure details**: Controls the number of details imported from stored procedures:
To create and run a new scan, do the following:
1. Select **Test connection**.
+ > [!Note]
+ > Use the "Test connection" button in the scan setup UI to test the connection. The "Test connection" option on the Diagnostics tab of the self-hosted integration runtime Configuration Manager UI doesn't fully validate connectivity.
+ 1. Select **Continue**. 1. Choose your **scan trigger**. You can set up a schedule or run the scan once.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire PostgreSQL database, or s
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
**If your data store is not publicly accessible** (if your data store limits access from on-premises network, private network or specific IPs, etc.) you need to configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
To create and run a new scan using Azure runtime, perform the following steps:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of Save and run Power BI source.":::
-## Troubleshooting tips
-
-If delegated auth is used:
-- Check your key vault. Make sure there are no typos in the password.-- Assign proper [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) to Power BI administrator user.-- Validate if user is assigned to Power BI Administrator role.-- If user is recently created, make sure password is reset successfully and user can successfully initiate the session.- ## Next steps Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Power Bi Tenant Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-troubleshoot.md
This article explores common troubleshooting methods for scanning Power BI tenan
|||||||| | [Yes](register-scan-power-bi-tenant.md#deployment-checklist)| [Yes](register-scan-power-bi-tenant.md#deployment-checklist)| Yes | No | No | No| [Yes](how-to-lineage-powerbi.md)|
+## Troubleshooting tips
+
+If delegated auth is used:
+- Check your key vault. Make sure there are no typos in the password.
+- Assign a proper [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) to the Power BI administrator user.
+- Validate that the user is assigned to the Power BI Administrator role.
+- If the user was recently created, make sure the password has been reset successfully and the user can successfully initiate a session.
+ ## Error code: Test connection failed - AASDST50079 - **Message**: `Failed to get access token with given credential to access Power BI tenant. Authentication type PowerBIDelegated Message: AASDST50079 Due to a configuration change made by your administrator or because you moved to a new location, you must enroll in multi-factor authentication.`
This article explores common troubleshooting methods for scanning Power BI tenan
Follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Use any of the following deployment checklists during the setup or for troublesh
3. If the user was recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate a session. 4. Make sure no MFA or Conditional Access policies are enforced on the user. 9. Validate App registration settings to make sure:
- 5. App registration exists in your Azure Active Directory tenant.
- 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
+ 1. App registration exists in your Azure Active Directory tenant.
+ 2. Under **API permissions**, the following **delegated permissions** are set up with read access, and **grant admin consent for the tenant** is selected, for the following APIs:
1. Power BI Service Tenant.Read.All 2. Microsoft Graph openid 3. Microsoft Graph User.Read
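Those delegated permissions are what a username/password (delegated) token request exercises. Below is a hedged sketch of what that acquisition looks like with MSAL; the client ID, tenant, and credentials are placeholders, and this flow fails if MFA or Conditional Access applies to the user, which is why the checklist above rules those out.

```python
# Hedged sketch: acquire a delegated Power BI token with MSAL's
# username/password flow. All identifiers below are placeholders.
import msal

app = msal.PublicClientApplication(
    "00000000-0000-0000-0000-000000000000",  # placeholder app registration ID
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)
result = app.acquire_token_by_username_password(
    "scan-admin@contoso.com",  # placeholder Power BI administrator user
    "<password>",              # placeholder; store real secrets in Key Vault
    scopes=["https://analysis.windows.net/powerbi/api/Tenant.Read.All"],
)
print("access_token" in result)  # False, with an "error" key, if acquisition failed
```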
To create and run a new scan, do the following:
Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire Salesforce organization,
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
You can use the fully managed Azure integration runtime for the scan. Make sure to provide the security token to authenticate to Salesforce; learn more from the credential configuration in the [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
To create and run a new scan, do the following:
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When scanning SAP BW source, Microsoft Purview supports extracting technical met
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.15.8079.1.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Follow the steps below to scan SAP BW to automatically identify assets and class
1. **Client ID**: Enter the SAP Client ID. It's a three-digit number from 000 to 999.
- 1. **JCo library path**: The directory path where the JCo libraries are located.
+ 1. **JCo library path**: Specify the directory path where the JCo libraries are located, for example, `D:\Drivers\SAPJCo`. Make sure the path is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Maximum memory available:** Maximum memory (in GB) available on the Self-hosted Integration Runtime machine to be used by scanning processes. This is dependent on the size of the SAP BW source to be scanned.
Follow the steps below to scan SAP BW to automatically identify assets and class
Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data. - [Search Data Catalog](how-to-search-catalog.md)-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Supported data sources and file types](azure-purview-connector-overview.md)
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
Previously updated : 01/11/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan an entire SAP HANA database, or sco
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.13.8013.1.
- * Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
- * Download the SAP HANA JDBC driver ([JAR ngdbc](https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc)) on the machine where your self-hosted integration runtime is running.
+ * Download the SAP HANA JDBC driver ([JAR ngdbc](https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc)) on the machine where your self-hosted integration runtime is running. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
### Required permissions for scan
The supported authentication type for a SAP HANA source is **Basic authenticatio
Usage of NOT and special characters isn't acceptable.
- 1. **Driver location**: Specify the path to the JDBC driver location in your machine where self-host integration runtime is running. This should be the path to valid JAR folder location. Don't include the name of the driver in the path.
+ 1. **Driver location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example, `D:\Drivers\SAPHANA`. It must be a valid path to the folder that contains the JAR files. Make sure the driver is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of the SAP HANA database to be scanned.
The supported authentication type for a SAP HANA source is **Basic authenticatio
Now that you've registered your source, use the following guides to learn more about Microsoft Purview and your data: -- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search the data catalog](how-to-search-catalog.md)
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
Previously updated : 01/20/2022 Last updated : 05/04/2022
When scanning SAP ECC source, Microsoft Purview supports:
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). >[!NOTE] >Scanning SAP ECC is a memory-intensive operation; we recommend installing the Self-hosted Integration Runtime on a machine with at least 128 GB of RAM.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after installing the JDK for the change to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
When scanning SAP ECC source, Microsoft Purview supports:
:::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
- * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on your virtual machine where self-hosted integration runtime is installed. Make sure that you're using the correct JCo distribution for your environment. For example: on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
+ * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on the machine where the self-hosted integration runtime is installed. Make sure that you're using the correct JCo distribution for your environment. For example: on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > The driver should be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules: * STFC_CONNECTION (check connectivity)
Follow the steps below to scan SAP ECC to automatically identify assets and clas
1. **Client ID**: Enter the SAP Client ID. This is a three-digit number from 000 to 999.
- 1. **JCo library path**: The directory path where the JCo libraries are located.
+ 1. **JCo library path**: Specify the directory path where the JCo libraries are located, for example, `D:\Drivers\SAPJCo`. Make sure the path is accessible to the self-hosted integration runtime; for more information, see the [prerequisites section](#prerequisites).
1. **Maximum memory available:** Maximum memory (in GB) available on the self-hosted integration runtime machine for scanning processes. This depends on the size of the SAP ECC source to be scanned. It's recommended to provide a large amount of memory, for example, 100.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.

-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
When scanning SAP S/4HANA source, Microsoft Purview supports:
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).

  >[!NOTE]
  >Scanning SAP S/4HANA is a memory-intensive operation. We recommend that you install the self-hosted integration runtime on a machine with at least 128 GB of RAM.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. If you newly installed the JDK, restart the machine for the installation to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
When scanning SAP S/4HANA source, Microsoft Purview supports:
:::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="Screenshot of the prerequisites." border="true":::
- * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Hence make sure the Java Connector is available on your virtual machine where self-hosted integration runtime is installed. Make sure that you're using the correct JCo distribution for your environment. For example, on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
+ * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on the machine where the self-hosted integration runtime is installed, and that you're using the correct JCo distribution for your environment. For example, on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- >The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > The driver must be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
* Deploy the metadata extraction ABAP function module on the SAP server by following the steps in the [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
  * STFC_CONNECTION (check connectivity)
Follow the steps below to scan SAP S/4HANA to automatically identify assets and
1. **Client ID:** Enter the SAP system client ID. The client is identified by a three-digit number from 000 to 999.
- 1. **JCo library path**: Specify the path to the folder where the JCo libraries are located.
+ 1. **JCo library path**: Specify the directory path where the JCo libraries are located, for example `D:\Drivers\SAPJCo`. Make sure the path is accessible to the self-hosted integration runtime. For more information, see the [prerequisites section](#prerequisites).
1. **Maximum memory available:** Maximum memory (in GB) available on the customer's VM for scanning processes. This depends on the size of the SAP S/4HANA source to be scanned. It's recommended to provide a large amount of memory, for example, 100.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.

-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
Previously updated : 03/05/2022 Last updated : 05/04/2022
When setting up scan, you can choose to scan one or more Snowflake database(s) e
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
**If your data store isn't publicly accessible** (if your data store limits access from an on-premises network, a private network, or specific IPs), you need to configure a self-hosted integration runtime to connect to it:

* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported self-hosted integration runtime version is 5.11.7971.2.
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. If you newly installed the JDK, restart the machine for the installation to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.

-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
To create and run a new scan, do the following:
Now that you have registered your source, follow the guides below to learn more about Microsoft Purview and your data.

-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Previously updated : 03/14/2022 Last updated : 05/04/2022
To retrieve data types of view columns, Microsoft Purview issues a prepare state
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. If you newly installed the JDK, restart the machine for the installation to take effect.
* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
- * You'll have to manually download Teradata's JDBC Driver on your virtual machine where self-hosted integration runtime is running. The executable JAR file can be downloaded from the Teradata [website](https://downloads.teradata.com/).
+ * Download the [Teradata JDBC driver](https://downloads.teradata.com/) on the machine where your self-hosted integration runtime is running. Note down the folder path; you'll use it to set up the scan.
> [!Note]
- > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ > The driver must be accessible to the self-hosted integration runtime. By default, the self-hosted integration runtime uses the [local service account "NT SERVICE\DIAHostService"](manage-integration-runtimes.md#service-account-for-self-hosted-integration-runtime). Make sure it has "Read and execute" and "List folder contents" permissions on the driver folder.
## Register
Follow the steps below to scan Teradata to automatically identify assets and cla
Usage of NOT and special characters isn't acceptable
- 1. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
+ 1. **Driver location**: Specify the path to the JDBC driver on the machine where the self-hosted integration runtime is running, for example `D:\Drivers\Teradata`. The path must point to a valid folder that contains the JAR file. Make sure the driver is accessible to the self-hosted integration runtime. For more information, see the [prerequisites section](#prerequisites).
1. **Stored procedure details**: Controls the number of details imported from stored procedures:
Go to the asset -> lineage tab, you can see the asset relationship when applicab
Now that you've registered your source, follow the guides below to learn more about Microsoft Purview and your data.

-- [Data insights in Microsoft Purview](concept-insights.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
- [Search Data Catalog](how-to-search-catalog.md)
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md
Title: Sensitivity label reporting on your data in Microsoft Purview using Microsoft Purview Insights
+ Title: Sensitivity label reporting on your data in Microsoft Purview using Microsoft Purview Data Estate Insights
description: This how-to guide describes how to view and use sensitivity label reporting on your data.
Last updated 04/22/2022
-# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
+# Customer intent: As a security officer, I need to understand how to use Microsoft Purview Data Estate Insights to learn about sensitive data identified, classified, and labeled during scanning.
This how-to guide describes how to access, view, and filter security insights provided by sensitivity labels applied to your data.

> [!IMPORTANT]
-> Sensitivity labels in Microsoft Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Sensitivity labels in Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, SQL Server, Azure SQL Database, Azure SQL Managed Instance, Amazon S3 buckets, Amazon RDS databases (public preview), Power BI
In this how-to guide, you'll learn how to:
## Prerequisites
-Before getting started with Microsoft Purview Insights, make sure that you've completed the following steps:
+Before getting started with Microsoft Purview Data Estate Insights, make sure that you've completed the following steps:
- Set up your Azure resources and populated the relevant accounts with test data
Before getting started with Microsoft Purview Insights, make sure that you've co
For more information, see [Manage data sources in Microsoft Purview](manage-data-sources.md) and [Automatically label your data in Microsoft Purview](create-sensitivity-label.md).
-## Use Microsoft Purview Insights for sensitivity labels
+## Use Microsoft Purview Data Estate Insights for sensitivity labels
Classifications are similar to subject tags and are used to mark and identify data of a specific type that's found within your data estate during scanning.
Classifications are matched directly, such as a social security number, which ha
In contrast, sensitivity labels are applied when one or more classifications and conditions are found together. In this context, [conditions](/microsoft-365/compliance/apply-sensitivity-label-automatically) refer to all the parameters that you can define for unstructured data, such as **proximity to another classification**, and **% confidence**.
-Microsoft Purview Insights uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as those used with Microsoft 365 apps and services. This enables you to extend your existing sensitivity labels to assets in the data map.
+Microsoft Purview Data Estate Insights uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as those used with Microsoft 365 apps and services. This enables you to extend your existing sensitivity labels to assets in the data map.
> [!NOTE]
> After you have scanned your source types, give **Sensitivity labeling** Insights a couple of hours to reflect the new assets.
Microsoft Purview Insights uses the same classifications, also known as [sensiti
1. On the **Overview** page, in the **Get Started** section, select the **Launch Microsoft Purview account** tile.
-1. In Microsoft Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
+1. In Microsoft Purview, select the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Data Estate Insights** area.
-1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Sensitivity labels** to display the Microsoft Purview **Sensitivity labeling insights** report.
+1. In the **Data Estate Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Sensitivity labels** to display the Microsoft Purview **Sensitivity labeling insights** report.
> [!NOTE]
> If this report is empty, you may not have extended your sensitivity labels to the Microsoft Purview Data Map. For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md).
remote-rendering Object Bounds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/object-bounds.md
Title: Object bounds
-description: Explains how spatial object bounds can be queried
+description: Learn how spatial object bounds can be queried. Object bounds represent the volume that an entity and its children occupy.
Last updated 02/03/2020 -+
+- devx-track-csharp
+- kr2b-contr-experiment
# Object bounds
-Object bounds represent the volume that an [entity](entities.md) and its children occupy. In Azure Remote Rendering, object bounds are always given as *axis aligned bounding boxes* (AABB). Object bounds can be either in *local space* or in *world space*. Either way, they are always axis-aligned, which means the extents and volume may differ between the local and world space representation.
+Object bounds represent the volume that an [entity](entities.md) and its children occupy. In Azure Remote Rendering, object bounds are always given as *axis aligned bounding boxes* (AABB). Object bounds can be either in *local space* or in *world space*. Either way, they're always axis-aligned, which means the extents and volume may differ between the local and world space representation.
## Querying object bounds
-The local axis aligned bounding box of a [mesh](meshes.md) can be queried directly from the mesh resource. These bounds can be transformed into the local space or world space of an entity using the entity's transform.
+The local axis aligned bounding box of a mesh can be queried directly from the mesh resource. These bounds can be transformed into the local space or world space of an entity using the entity's transform. For more information, see [Meshes](meshes.md).
-It's possible to compute the bounds of an entire object hierarchy this way, but that requires to traverse the hierarchy, query the bounds for each mesh, and combine them manually. This operation is both tedious and inefficient.
+It's possible to compute the bounds of an entire object hierarchy this way. That approach requires traversing the hierarchy, querying the bounds for each mesh, and combining them manually. This operation is both tedious and inefficient.
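For illustration, the manual approach looks roughly like the sketch below. `LocalAabb` and `SceneNode` are hypothetical stand-in types, not Azure Remote Rendering API types; the point is the per-mesh walk you'd have to maintain yourself.

```cs
#nullable enable
using System.Collections.Generic;
using System.Numerics;

// Hypothetical stand-in types for illustration; not ARR API types.
record LocalAabb(Vector3 Min, Vector3 Max)
{
    public static LocalAabb Union(LocalAabb a, LocalAabb b) =>
        new(Vector3.Min(a.Min, b.Min), Vector3.Max(a.Max, b.Max));
}

class SceneNode
{
    public LocalAabb? MeshBounds;        // world-space mesh bounds, if this node has a mesh
    public List<SceneNode> Children = new();
}

static class BoundsWalker
{
    // One bounds lookup per mesh, combined by hand on the client:
    // exactly the tedious, inefficient traversal described above.
    public static LocalAabb? Combine(SceneNode node)
    {
        LocalAabb? result = node.MeshBounds;
        foreach (var child in node.Children)
        {
            var childBounds = Combine(child);
            if (childBounds is null) continue;
            result = result is null ? childBounds : LocalAabb.Union(result, childBounds);
        }
        return result;
    }
}
```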
-A better way is to call `QueryLocalBoundsAsync` or `QueryWorldBoundsAsync` on an entity. The computation is then offloaded to the server and returned with minimal delay.
+A better way is to call `QueryLocalBoundsAsync` or `QueryWorldBoundsAsync` on an entity. This approach offloads computation to the server and returns with minimal delay.
```cs
public async void GetBounds(Entity entity)
remote-rendering Override Materials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/override-materials.md
Title: Override materials during model conversion
-description: Explains the material overriding workflow at conversion time
+description: Learn about the material overriding workflow at conversion time. Material settings in the source model define the PBR materials used by the renderer.
Last updated 02/13/2020 +

# Override materials during model conversion
-The material settings in the source model are used to define the [PBR materials](../../overview/features/pbr-materials.md) used by the renderer.
-Sometimes the [default conversion](../../reference/material-mapping.md) doesn't give the desired results and you need to make changes.
+The material settings in the source model define the [PBR materials](../../overview/features/pbr-materials.md) used by the renderer.
+Sometimes the default conversion doesn't give the desired results and you need to make changes. For more information, see [Material mapping for model formats](../../reference/material-mapping.md).
+ When a model is converted for use in Azure Remote Rendering, you can provide a material override file to customize how material conversion is done on a per-material basis.
-If a file called `<modelName>.MaterialOverrides.json` is found in the input container beside the input model `<modelName>.<ext>`, then it will be used as the material override file.
+If a file called *\<modelName>.MaterialOverrides.json* is found in the input container with the input model *\<modelName>.\<ext>*, it's used as the material override file.
## The override file used during conversion
-As a simple example, let's say that a box model has a single material, called "Default".
-Additionally, let's say its albedo color needs to be adjusted for use in ARR.
-In this case, a `box.MaterialOverrides.json` file can be created as follows:
+As a simple example, take a box model that has a single material, called `Default`.
+Its albedo color needs to be adjusted for use in Remote Rendering.
+In this case, a *box.MaterialOverrides.json* file can be created as follows:
```json
[
In this case, a `box.MaterialOverrides.json` file can be created as follows:
]
```
-The `box.MaterialOverrides.json` file is placed in the input container beside `box.fbx`, which tells the conversion service to apply the new settings.
+The *box.MaterialOverrides.json* file is placed in the input container with *box.fbx*, which tells the conversion service to apply the new settings.
### Color materials
-The [color material](../../overview/features/color-materials.md) model describes a constantly shaded surface that is independent of lighting.
-Color materials are useful for assets made by Photogrammetry algorithms, for example.
+The color material model describes a constantly shaded surface that is independent of lighting.
+Color materials are useful for assets made by Photogrammetry algorithms, for example. For more information, see [Color materials](../../overview/features/color-materials.md).
In material override files, a material can be declared to be a color material by setting `unlit` to `true`.

```json
In material override files, a material can be declared to be a color material by
### Ignore specific texture maps
-Sometimes you might want the conversion process to ignore specific texture maps. This might be the case when your model was generated by a tool that generates special maps not understood correctly by the renderer. For example, an "OpacityMap" that is used to define something other than opacity, or a model where the "NormalMap" is stored as "BumpMap". (In the latter case you want to ignore "NormalMap", which will cause the converter to use "BumpMap" as "NormalMap".)
+Sometimes you might want the conversion process to ignore specific texture maps. This situation might happen when your model was generated by a tool that generates special maps not understood by the renderer. For example, an "OpacityMap" might be used to define something other than opacity, or the "NormalMap" is stored as "BumpMap". In the latter case you want to ignore "NormalMap", which causes the converter to use "BumpMap" as "NormalMap".
-The principle is simple. Just add a property called `ignoreTextureMaps` and add any texture map you want to ignore:
+Add a property called `ignoreTextureMaps` and add any texture map you want to ignore:
```json
[
The principle is simple. Just add a property called `ignoreTextureMaps` and add
]
```
-For the full list of texture maps you can ignore, see the JSON schema below.
+For the full list of texture maps you can ignore, see the [JSON schema](#json-schema).
### Applying the same overrides to multiple materials

By default, an entry in the material overrides file applies when its name matches the material name exactly.
-Since it's quite common that the same override should apply to multiple materials, you can optionally provide a regular expression as the entry name.
+Since it's common that the same override should apply to multiple materials, you can optionally provide a regular expression as the entry name.
The field `nameMatching` has a default value `exact`, but it can be set to `regex` to state that the entry should apply to every matching material.
-The syntax used is the same as that used for JavaScript.
-The following example shows an override which applies to materials with names like "Material2", "Material01" and "Material999".
+The syntax used is the same syntax used for JavaScript.
+The following example shows an override that applies to materials with names like `Material2`, `Material01` and `Material999`.
```json
[
The following example shows an override which applies to materials with names li
]
```
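If you're unsure what a `regex` entry will match, a quick check outside the conversion service can help. The sketch below uses .NET's regex engine as a stand-in for the converter's JavaScript-flavor matching, which is close enough for simple patterns; the pattern `^Material[0-9]+$` is an assumed example, not taken from the schema.

```cs
using System;
using System.Text.RegularExpressions;

// Quick check of which material names a hypothetical 'regex' entry matches.
// "^Material[0-9]+$" is an assumed pattern for illustration only.
class RegexEntryCheck
{
    static void Main()
    {
        var pattern = new Regex("^Material[0-9]+$");
        foreach (var name in new[] { "Material2", "Material01", "Material999", "Default" })
        {
            Console.WriteLine($"{name} -> {(pattern.IsMatch(name) ? "override applies" : "no match")}");
        }
    }
}
```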
] ```
-At most one entry in a material override file applies to a single material.
-If there is an exact match (i.e. `nameMatching` is absent or equals `exact`) for the material name, then that entry is chosen.
+At most, one entry in a material override file applies to a single material.
+If there's an exact match (that is, `nameMatching` is absent or equals `exact`) for the material name, then that entry is chosen.
Otherwise, the first regex entry in the file that matches the material name is chosen.

### Getting information about which entries applied
-The [info file](get-information.md#information-about-a-converted-model-the-info-file) written to the output container carries information about the number of overrides provided, and the number of materials that were overridden.
+The info file written to the output container carries information about the number of overrides provided, and the number of materials that were overridden. For more information, see [Information about a converted model](get-information.md#information-about-a-converted-model-the-info-file).
## JSON schema
-The full JSON schema for materials files is given here. With the exception of `unlit` and `ignoreTextureMaps`, the properties available are a subset of the properties described in the sections on the [color material](../../overview/features/color-materials.md) and [PBR material](../../overview/features/pbr-materials.md) models.
+The full JSON schema for materials files is given here. Except for `unlit` and `ignoreTextureMaps`, the properties available are a subset of the properties described in the sections on the [color material](../../overview/features/color-materials.md) and [PBR material](../../overview/features/pbr-materials.md) models.
```json
{
remote-rendering Z Fighting Mitigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/z-fighting-mitigation.md
Title: Z-fighting mitigation
-description: Describes techniques to mitigate z-fighting artifacts
+description: Learn about techniques to mitigate z-fighting artifacts that occur when surfaces overlap and it isn't clear which one should be rendered on top.
Last updated 02/06/2020--++
+- devx-track-csharp
+- kr2b-contr-experiment
# Z-fighting mitigation
-When two surfaces overlap, it is not clear which one should be rendered on top of the other. The result even varies per pixel, resulting in camera view-dependent artifacts. Consequently, when the camera or the mesh moves, these patterns flicker noticeably. This artifact is called *z-fighting*. For AR and VR applications, the problem is intensified because head-mounted devices naturally always move. To prevent viewer discomfort z-fighting mitigation functionality is available in Azure Remote Rendering.
+When two surfaces overlap, it isn't clear which one should be rendered on top of the other. The result even varies per pixel, resulting in camera view-dependent artifacts. When the camera or the mesh moves, these patterns flicker noticeably. This artifact is called *z-fighting*. For augmented reality and virtual reality applications, the problem is intensified because head-mounted devices naturally always move. To prevent viewer discomfort, Azure Remote Rendering offers z-fighting mitigation functionality.
## Z-fighting mitigation modes

|Situation | Result |
|--|:--|
-|Regular z-fighting |![No deterministic precedence between red and green quads](./media/zfighting-0.png)|
-|Z-fighting mitigation enabled |![Red quad has precedence](./media/zfighting-1.png)|
-|Checkerboard highlighting enabled|![Red and green quad toggle preference in checkerboard pattern](./media/zfighting-2.png)|
+|Regular z-fighting |![Screenshot shows no deterministic precedence between red and green quads.](./media/zfighting-0.png)|
+|Z-fighting mitigation enabled |![Screenshot displays the red quad precedence with a solid red rectangle.](./media/zfighting-1.png)|
+|Checkerboard highlighting enabled|![Screenshot shows red and green quad toggle preference with a checkerboard pattern rectangle.](./media/zfighting-2.png)|
The following code enables z-fighting mitigation:
void EnableZFightingMitigation(ApiHandle<RenderingSession> session, bool highlig
Z-fighting happens mainly for two reasons:
-1. when surfaces are very far away from the camera, the precision of their depth values degrades and the values become indistinguishable
-1. when surfaces in a mesh physically overlap
+* When surfaces are very far away from the camera, the precision of their depth values degrades and the values become indistinguishable
+* When surfaces in a mesh physically overlap
-The first problem can always happen and is difficult to eliminate. If this happens in your application, make sure that the ratio of the *near plane* distance to the *far plane* distance is as low as practical. For example, a near plane at distance 0.01 and far plane at distance 1000 will create this problem much earlier, than having the near plane at 0.1 and the far plane at distance 20.
+The first problem can always happen and is difficult to eliminate. If this situation happens in your application, make sure that the ratio of the *near plane* distance to the *far plane* distance is as low as practical. For example, a near plane at distance 0.01 and far plane at distance 1000 creates this problem much earlier than having the near plane at 0.1 and the far plane at distance 20.
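As a rough illustration of that guidance, a check like the following flags risky near/far ratios. This isn't an Azure Remote Rendering API, and the 10,000 threshold is only an assumed rule of thumb.

```cs
// Illustration only; not part of the Azure Remote Rendering API.
// The 10,000 threshold is an assumed rule of thumb, not an official limit.
static bool IsDepthRatioRisky(float nearPlane, float farPlane, float maxRatio = 10_000f) =>
    farPlane / nearPlane > maxRatio;

// The article's own numbers: 0.01/1000 is far riskier than 0.1/20.
System.Console.WriteLine(IsDepthRatioRisky(0.01f, 1000f)); // True  (ratio 100,000)
System.Console.WriteLine(IsDepthRatioRisky(0.1f, 20f));    // False (ratio 200)
```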
-The second problem is an indicator for badly authored content. In the real world, two objects can't be in the same place at the same time. Depending on the application, users might want to know whether overlapping surfaces exist and where they are. For example, a CAD scene of a building that is the basis for a real world construction, shouldn't contain physically impossible surface intersections. To allow for visual inspection, the highlighting mode is available, which displays potential z-fighting as an animated checkerboard pattern.
+The second problem is an indication of badly authored content. In the real world, two objects can't be in the same place at the same time. Depending on the application, users might want to know whether overlapping surfaces exist and where they are. For example, a CAD scene of a building that is the basis for a real-world construction shouldn't contain physically impossible surface intersections. To allow for visual inspection, the highlighting mode is available, which displays potential z-fighting as an animated checkerboard pattern.
## Limitations
-The provided z-fighting mitigation is a best effort. There is no guarantee that it removes all z-fighting. Also it will automatically prefer one surface over another. Thus when you have surfaces that are too close to each other, it might happen that the "wrong" surface ends up on top. A common problem case is when text and other decals are applied to a surface. With z-fighting mitigation enabled these details could easily just vanish.
+The provided z-fighting mitigation is a best effort. There's no guarantee that it removes all z-fighting. Also, mitigation prefers one surface over another. When you have surfaces that are too close to each other, the "wrong" surface might end up on top. A common problem case is when text and other decals are applied to a surface. With z-fighting mitigation enabled, these details could easily just vanish.
## Performance considerations
remote-rendering Deploy Native Cpp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/native-cpp/hololens/deploy-native-cpp-tutorial.md
Title: Deploy native C++ WMR tutorial to HoloLens
-description: Quickstart that shows how to run the native C++ HolographicApp tutorial on HoloLens
+ Title: Deploy native C++ Windows Mixed Reality tutorial to HoloLens
+description: In this quickstart, learn to run the native C++ HolographicApp tutorial on HoloLens. Build the tutorial application, change credentials, and run the sample.
Last updated 06/08/2020 -+
+- mode-api
+- kr2b-contr-experiment
# Quickstart: Deploy native C++ WMR sample to HoloLens
-This quickstart covers how to deploy and run the native C++ WMR (Windows Mixed Reality) tutorial application on a HoloLens 2.
+This quickstart covers how to deploy and run the native C++ Windows Mixed Reality (WMR) tutorial application on a HoloLens 2.
-In this quickstart you will learn how to:
+In this quickstart, you'll learn how to:
> [!div class="checklist"]
>
>* Build the tutorial application for HoloLens.
->* Change the ARR credentials in the source code.
+>* Change the Azure Remote Rendering credentials in the source code.
>* Deploy and run the sample on the device.

## Prerequisites
-To get access to the Azure Remote Rendering service, you first need to [create an account](../../../how-tos/create-an-account.md).
+To get access to the Remote Rendering service, you first need to [create an account](../../../how-tos/create-an-account.md).
The following software must be installed:
-* Windows SDK 10.0.18362.0 [(download)](https://developer.microsoft.com/windows/downloads/windows-10-sdk)
-* The latest version of Visual Studio 2019 [(download)](https://visualstudio.microsoft.com/vs/older-downloads/)
-* [Visual Studio tools for Mixed Reality](/windows/mixed-reality/install-the-tools). Specifically, the following *Workload* installations are mandatory:
+* [Windows SDK 10.0.18362.0](https://developer.microsoft.com/windows/downloads/windows-10-sdk) or later.
+* [The latest version of Visual Studio 2019](https://visualstudio.microsoft.com/vs/older-downloads/).
+* [Visual Studio tools for Mixed Reality](/windows/mixed-reality/install-the-tools). Specifically, the following *Workload* installations are required:
* **Desktop development with C++** * **Universal Windows Platform (UWP) development**
-* GIT [(download)](https://git-scm.com/downloads)
+* [GIT](https://git-scm.com/downloads).
-## Clone the ARR samples repository
+## Clone the Remote Rendering samples repository
-As a first step, we clone the Git repository, which houses the global Azure Remote Rendering samples. Open a command prompt (type `cmd` in the Windows start menu) and change to a directory where you want to store the ARR sample project.
+As a first step, clone the Git repository, which houses the global Azure Remote Rendering samples. Type `cmd` in the Windows Start menu to open a command prompt window. Change to a directory where you want to store the ARR sample project.
Run the following commands:
cd ARR
git clone https://github.com/Azure/azure-remote-rendering
```
-The last command creates a subdirectory in the ARR directory containing the various sample projects for Azure Remote Rendering.
+The last command creates a folder in the ARR folder that contains the various sample projects for Azure Remote Rendering.
-The C++ HoloLens tutorial can be found in the subdirectory *NativeCpp/HoloLens-Wmr*.
+The C++ HoloLens tutorial can be found in the folder *NativeCpp/HoloLens-Wmr*.
## Build the project
-Open the solution file *HolographicApp.sln* located in the *NativeCpp/HoloLens-Wmr* subdirectory with Visual Studio 2019.
+Open the solution file *HolographicApp.sln* located in the *NativeCpp/HoloLens-Wmr* folder with Visual Studio 2019.
-Switch the build configuration to *Debug* (or *Release*) and *ARM64*. Also make sure the debugger mode is set to *Device* as opposed to *Remote Machine*:
+Switch the build configuration to *Debug* (or *Release*) and *ARM64*. Make sure the debugger mode is set to *Device* as opposed to *Remote Machine*:
-![Visual Studio config](media/vs-config-native-cpp-tutorial.png)
+![Screenshot shows the Visual Studio configuration area with values as described.](media/vs-config-native-cpp-tutorial.png)
-Since the account credentials are hardcoded in the tutorial's source code, change them to valid credentials. For that, open the file `HolographicAppMain.cpp` inside Visual Studio and change the part where the client is created inside the constructor of class `HolographicAppMain`:
+Since the account credentials are hardcoded in the tutorial's source code, change them to valid credentials. Open the file *HolographicAppMain.cpp* inside Visual Studio and change the part where the client is created inside the constructor of class `HolographicAppMain`:
```cpp
// 2. Create Client
Since the account credentials are hardcoded in the tutorial's source code, chang
```

Specifically, change the following values:
``` Specifically, change the following values:
-* `init.AccountId`, `init.AccountKey`, and `init.AccountDomain` to use your account data. See the paragraph about how to [retrieve account information](../../../how-tos/create-an-account.md#retrieve-the-account-information).
+
+* `init.AccountId`, `init.AccountKey`, and `init.AccountDomain` to use your account data. See the section about how to [retrieve account information](../../../how-tos/create-an-account.md#retrieve-the-account-information).
* Specify where to create the remote rendering session by modifying the region part of the `init.RemoteRenderingDomain` string for other [regions](../../../reference/regions.md) than `westus2`, for instance `"westeurope.mixedreality.azure.com"`.
-* In addition, `m_sessionOverride` can be changed to an existing session ID. Sessions can be created outside this sample, for instance by using [the PowerShell script](../../../samples/powershell-example-scripts.md#script-renderingsessionps1) or using the [session REST API](../../../how-tos/session-rest-api.md) directly.
-Creating a session outside the sample is recommended when the sample should run multiple times. If no session is passed in, the sample will create a new session upon each startup, which may take several minutes.
+* In addition, `m_sessionOverride` can be changed to an existing session ID. Sessions can be created outside this sample. For more information, see [RenderingSession.ps1](../../../samples/powershell-example-scripts.md#script-renderingsessionps1) or [Use the session management REST API](../../../how-tos/session-rest-api.md) directly.
+
+Creating a session outside the sample is recommended when the sample should run multiple times. If no session is passed in, the sample creates a session upon each startup, which may take several minutes.
-Now the application can be compiled.
+Now you can compile the application.
## Launch the application

1. Connect the HoloLens with a USB cable to your PC.
1. Turn on the HoloLens and wait until the start menu shows up.
-1. Start the Debugger in Visual Studio (F5). It will automatically deploy the app to the device.
+1. Start the Debugger in Visual Studio (F5). It automatically deploys the app to the device.
-The sample app should launch and a text panel should appear that informs you about the current application state. The status at startup time is either starting a new session or connecting to an existing session. After model loading has completed, the built-in engine model appears right at your head position. Occlusion-wise, the engine model interacts properly with the spinning cube that is rendered locally.
+The sample app launches and a text panel appears that informs you about the current application state. The status at startup time is either starting a new session or connecting to an existing session. After model loading finishes, the built-in engine model appears right at your head position. Occlusion-wise, the engine model interacts properly with the spinning cube that is rendered locally.
- If you want to launch the sample a second time later, you can also find it from the HoloLens start menu, but note it may have an expired session ID compiled into it.
+ If you want to launch the sample again later, you can also find it in the HoloLens start menu. It might have an expired session ID compiled into it.
## Next steps
remote-rendering Arr Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/resources/tools/arr-inspector.md
Title: The ArrInspector inspection tool
-description: User manual of the ArrInspector tool
+description: The ArrInspector is a web-based tool used to inspect a running Azure Remote Rendering session. Learn how to use this tool for debugging.
Last updated 03/09/2020 +

# The ArrInspector inspection tool
-The ArrInspector is a web-based tool used to inspect a running Azure Remote Rendering session. It is meant to be used for debugging purposes, to inspect the structure of the scene being rendered, show the log messages, and monitor the live performance on the server.
+The ArrInspector is a web-based tool used to inspect a running Azure Remote Rendering session. It's meant to be used for debugging purposes, to inspect the structure of the scene being rendered, show the log messages, and monitor the live performance on the server.
-![ArrInspector](./media/arr-inspector.png)
+![Screenshot shows the ArrInspector tool interface.](./media/arr-inspector.png)
## Connecting to the ArrInspector
-Once you obtain the hostname (ending in `mixedreality.azure.com`) of your ARR server, connect using [ConnectToArrInspectorAsync](../../how-tos/frontend-apis.md#connect-to-arr-inspector). This function creates a `StartArrInspector.html` on the device on which the application is running. To launch ArrInspector, open that file with a browser (Edge, Firefox, or Chrome) on a PC. The file is only valid for 24 hours.
+Once you obtain the hostname (ending in `mixedreality.azure.com`) of your Remote Rendering server, connect using `ConnectToArrInspectorAsync`. See [Connect to ARR inspector](../../how-tos/frontend-apis.md#connect-to-arr-inspector). This function creates a *StartArrInspector.html* page on the device on which the application runs. To launch ArrInspector, open that file with a browser on a PC. The file is only valid for 24 hours.
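As a minimal C# sketch of that call: the exact class and overload may differ by SDK version, so treat `RenderingSession` and the parameter-free `ConnectToArrInspectorAsync` shown here as assumptions based on the names in this article.

```cs
using System.Threading.Tasks;
using Microsoft.Azure.RemoteRendering;

static class InspectorLauncher
{
    // Asks a connected session to write StartArrInspector.html on the device
    // running the app; open that file in a browser on a PC within 24 hours.
    public static async Task LaunchAsync(RenderingSession session)
    {
        await session.ConnectToArrInspectorAsync();
    }
}
```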
If the app that calls `ConnectToArrInspectorAsync` is already running on a PC:
-* If you are using the Unity integration, it may get launched automatically for you.
-* Otherwise, you will find the file in *User Folders\\LocalAppData\\[your_app]\\AC\\Temp*.
+* If you're using the Unity integration, it may get launched automatically for you.
+* Otherwise, you'll find the file in *User Folders\\LocalAppData\\[your_app]\\AC\\Temp*.
If the app is running on a HoloLens:
## The Performance panel
-![The Performance Panel](./media/performance-panel.png)
+![Screenshot shows the ArrInspector Performance panel.](./media/performance-panel.png)
This panel shows graphs of all per-frame performance values exposed by the server. The values currently include frame time, FPS, CPU and memory usage, and memory stats such as overall RAM usage and object counts.
-To visualize one of these parameters, click the **Add New** button and select one of the available values shown in the dialog. This action adds a new scrolling chart to the panel, tracing the values in real time. On its right you can see the *minimum*, *maximum* and *current* value.
+To visualize one of these parameters, select the **Add New** button and select one of the available values shown in the dialog box. This action adds a new scrolling chart to the panel, tracing the values in real time. On its right you can see the *minimum*, *maximum* and *current* value.
You can pan the graph by dragging its content with the mouse. However, panning horizontally is only possible when ArrInspector is in the paused state.
-Holding CTRL while dragging, allows you to zoom. Horizontal zoom can also be controlled with the slider at the bottom.
+Holding **Ctrl** while dragging allows you to zoom. Horizontal zoom can also be controlled with the slider at the bottom.
-The vertical range is by default computed based on the values currently displayed, and min and max values are shown in the text-boxes on the right. When the values are set manually, either by typing them directly into the textbox, or by panning/zooming, the graph will use those values. To restore the automatic vertical framing, click the icon in the top-right corner.
+The vertical range is by default computed based on the values currently displayed, and min and max values are shown in the text-boxes on the right. When the values are set manually, either by typing them directly into the textbox, or by panning/zooming, the graph uses those values. To restore the automatic vertical framing, select the icon in the top-right corner.
-![vertical range](./media/vertical-range.png)
+![Screenshot shows the vertical range minimum and maximum values.](./media/vertical-range.png)
## The Log panel
-![Log Panel](./media/log-panel.png)
+![Screenshot shows the Log panel, which displays log messages.](./media/log-panel.png)
-The log panel shows a list of log messages generated on the server side. On connection it will show up to 200 previous log messages, and will print new ones as they happen.
+The Log panel shows a list of log messages generated on the server side. On connection it shows up to 200 previous log messages, and prints new ones as they happen.
You can filter the list based on the log type `[Error/Warning/Info/Debug]` using the buttons at the top.
![Screenshot shows the log filter buttons.](./media/log-filter.png)
## The Timing Data Capture panel
-![Timing Data Capture](./media/timing-data-capture.png)
+![Screenshot shows the Timing Data Capture panel.](./media/timing-data-capture.png)
This panel is used to capture timing information from the server and download it. The file uses the [Chrome Tracing JSON format](https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/edit). To inspect the data, open Chrome, go to `chrome://tracing`, and drag-and-drop the downloaded file into the page. The timing data is continuously collected in a fixed-size ring buffer. When written out, the capture only includes information about the immediate past, meaning a couple of seconds to a few minutes.

## The Scene Inspection panel
-![Scene Inspection Panel](./media/scene-inspection-panel.png)
+![Screenshot shows the Scene Inspection panel with FORWARD selected.](./media/scene-inspection-panel.png)
-This panel shows the structure of the rendered scene. The object hierarchy is on the left, the content of the selected object is on the right. The panel is read-only and is updated in real time.
+The Scene Inspection panel shows the structure of the rendered scene. The object hierarchy is on the left, the content of the selected object is on the right. The panel is read-only and is updated in real time.
## The VM Debug Information panel
-![VM Debug Information Panel](./media/state-debugger-panel.png)
+![Screenshot shows the V M Debug Information panel.](./media/state-debugger-panel.png)
-This panel offers some debug functionality.
+The VM Debug Information panel offers some debug functionality.
### Restart service
-The **Restart Service** button restarts the runtime on the virtual machine that arrInspector is connected to. Any attached client will get disconnected and the arrInspector page must be reloaded to connect to the restarted service.
+The **Restart Service** button restarts the runtime on the virtual machine that ArrInspector is connected to. Any attached client gets disconnected and the ArrInspector page must be reloaded to connect to the restarted service.
### Collect debug information
-The **Collect Debug Information for VM** button opens a dialog that allows you to trigger the ARR instance to collect debug information on the VM:
+The **Collect Debug Information for VM** button allows you to trigger the Remote Rendering instance to collect debug information on the virtual machine:
-![VM Debug Information Dialog](./media/state-debugger-dialog.png)
+![Screenshot shows the V M Debug Information dialog box.](./media/state-debugger-dialog.png)
-Debug information helps the Azure Remote Rendering team to analyze any issues that occur in a running ARR instance. The dialog has a text field to provide additional details, for example steps to reproduce an issue.
+Debug information helps the Azure Remote Rendering team to analyze any issues that occur in a running Remote Rendering instance. The dialog box has a text field to provide other details, for example steps to reproduce an issue.
-After clicking the **Start Collecting** button, the dialog will close and the collection process begins. Collecting the information on the VM can take a few minutes.
+After you select **Start Collecting**, the dialog box closes and the collection process begins. Collecting the information on the virtual machine can take a few minutes.
-![VM Debug Information collection in progress](./media/state-debugger-panel-in-progress.png)
+![Screenshot shows V M Debug Information collection in progress.](./media/state-debugger-panel-in-progress.png)
-Once the collection is finished, you will receive a notification in the ArrInspector window. This notification contains an ID that identifies this particular collection. Be sure to save this ID to pass it on to the Azure Remote Rendering team.
+Once the collection is finished, you'll receive a notification in the ArrInspector window. This notification contains an ID for this particular collection. Be sure to save this ID to pass it on to the Azure Remote Rendering team.
-![VM Debug Information collection success](./media/state-debugger-snackbar-success.png)
+![Screenshot shows the V M Debug Information collection success message.](./media/state-debugger-snackbar-success.png)
> [!IMPORTANT]
-> You can't download or otherwise access VM debug information. Only the Azure Remote Rendering team has access to the collected data. You need to contact us and send the collection ID along, for us to investigate the issue you are seeing.
+> You can't download or otherwise access virtual machine debug information. Only the Azure Remote Rendering team has access to the collected data. You need to contact us and send the collection ID for us to investigate the issue.
## Pause mode

In the top-right corner, a switch allows you to pause live updates of the panels. This mode can be useful to carefully inspect a specific state.
-![Pause Mode](./media/pause-mode.png)
+![Screenshot shows the control to pause live updates.](./media/pause-mode.png)
-When re-enabling live update, all panels are reset.
+When re-enabling live updates, all panels are reset.
site-recovery Azure To Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-architecture.md
description: Overview of the architecture used when you set up disaster recovery
Previously updated : 3/13/2020 Last updated : 4/28/2022
When you enable replication for a VM, Site Recovery gives you the option of crea
**Target resource** | **Default setting**
--- | ---
**Target subscription** | Same as the source subscription.
-**Target resource group** | The resource group to which VMs belong after failover.<br/><br/> It can be in any Azure region except the source region.<br/><br/> Site Recovery creates a new resource group in the target region, with an "asr" suffix.<br/><br/>
+**Target resource group** | The resource group to which VMs belong after failover.<br/><br/> It can be in any Azure region except the source region.<br/><br/> Site Recovery creates a new resource group in the target region, with an "asr" suffix.
**Target VNet** | The virtual network (VNet) in which replicated VMs are located after failover. A network mapping is created between source and target virtual networks, and vice versa.<br/><br/> Site Recovery creates a new VNet and subnet, with the "asr" suffix.
**Target storage account** | If the VM doesn't use a managed disk, this is the storage account to which data is replicated.<br/><br/> Site Recovery creates a new storage account in the target region, to mirror the source storage account.
**Replica managed disks** | If the VM uses a managed disk, these are the managed disks to which data is replicated.<br/><br/> Site Recovery creates replica managed disks in the storage region to mirror the source.
When you enable replication for a VM, Site Recovery gives you the option of crea
You can manage target resources as follows:

- You can modify target settings as you enable replication. Note that the default SKU for the target region VM is the same as the SKU of the source VM (or the next best available SKU in comparison to the source VM SKU). The dropdown list only shows relevant SKUs of the same family as the source VM (Gen 1 or Gen 2).
-- You can modify target settings after replication is already working. Similar to other resources such as the target resource group, target name, and others, the target region VM SKU can also be updated after replication is in progress. A resource which cannot be updated is the availability type (single instance, set or zone). To change this setting you need to disable replication, modify the setting, and then reenable.
+- You can modify target settings after replication is already working. Similar to other resources such as the target resource group, target name, and others, the target region VM SKU can also be updated after replication is in progress. A resource which cannot be updated is the availability type (single instance, set or zone). To change this setting, you need to disable replication, modify the setting, and then reenable.
## Replication policy
-When you enable Azure VM replication, by default Site Recovery creates a new replication policy with the default settings summarized in the table.
+By default, when you enable Azure VM replication, Site Recovery creates a new replication policy with the default settings summarized in the following table.
**Policy setting** | **Details** | **Default**
--- | --- | ---
-**Recovery point retention** | Specifies how long Site Recovery keeps recovery points | 1 day
+**Recovery point retention** | Specifies how long Site Recovery keeps recovery points. | 1 day
**App-consistent snapshot frequency** | How often Site Recovery takes an app-consistent snapshot. | 0 hours (Disabled)

### Managing replication policies
-You can manage and modify the default replication policies settings as follows:
+You can manage and modify the settings of default replication policies as follows:
- You can modify the settings as you enable replication.
- You can create a replication policy at any time, and then apply it when you enable replication.
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
Title: Common questions about Azure VM disaster recovery with Azure Site Recover
description: This article answers common questions about Azure VM disaster recovery when you use Azure Site Recovery. Previously updated : 07/25/2021 Last updated : 04/28/2022
Yes. Site Recovery supports disaster recovery of VMs that have Azure Disk Encryp
- Site Recovery supports:
  - ADE version 0.1, which has a schema that requires Azure Active Directory (Azure AD).
  - ADE version 1.1, which doesn't require Azure AD. For version 1.1, Windows Azure VMs must have managed disks.
- - [Learn more](../virtual-machines/extensions/azure-disk-enc-windows.md#extension-schema). about the extension schemas.
+ - [Learn more](../virtual-machines/extensions/azure-disk-enc-windows.md#extension-schema) about the extension schemas.
[Learn more](azure-to-azure-how-to-enable-replication-ade-vms.md) about enabling replication for encrypted VMs.
Yes. Site Recovery supports disaster recovery of VMs that have Azure Disk Encryp
When you allow Site Recovery to manage updates for the Mobility service extension running on replicated Azure VMs, it deploys a global runbook (used by Azure services) via an Azure Automation account. You can use the automation account that Site Recovery creates, or choose an existing automation account.
-Currently, in the portal, you can only select an automation account in the same resource group as the vault. You can select an automation account from a different resource group using PowerShell. [Learn more](azure-to-azure-autoupdate.md#enable-automatic-updates).
+Currently, in the portal, you can only select an automation account in the same resource group as the vault. You can select an automation account from a different resource group using PowerShell. [Learn more](azure-to-azure-autoupdate.md#enable-automatic-updates) about enabling automatic updates.
### If I use a custom automation account that's not in the vault resource group, can I delete the default runbook?
Support for this is limited to a few regions. [Learn more](azure-to-azure-how-to
### Can I exclude disks from replication?
-Yes, you can exclude disks when you set up replication, using PowerShell. [Learn more](azure-to-azure-exclude-disks.md).
+Yes, you can exclude disks when you set up replication, using PowerShell. [Learn more](azure-to-azure-exclude-disks.md) about excluding disks.
### Can I replicate new disks added to replicated VMs?
Site Recovery doesn't support "hot remove" of disks from a replicated VM. If you
### How often can I replicate to Azure?
-Replication is continuous when replicating Azure VMs to another Azure region. [Learn more](./azure-to-azure-architecture.md#replication-process) about how replication works.
+Replication is continuous when replicating Azure VMs to another Azure region. [Learn more](./azure-to-azure-architecture.md#replication-process) about the replication process.
### Can I replicate virtual machines within a region?
site-recovery Azure To Azure Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md
Title: Set up Azure VM disaster recovery to a secondary region with Azure Site Recovery description: Quickly set up disaster recovery to another Azure region for an Azure VM, using the Azure Site Recovery service. Previously updated : 03/27/2020 Last updated : 05/02/2022
The [Azure Site Recovery](site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business applications online during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.
-This quickstart describes how to set up disaster recovery for an Azure VM by replicating it to a secondary Azure region. In general, default settings are used to enable replication.
+This quickstart describes how to set up disaster recovery for an Azure VM by replicating it to a secondary Azure region. In general, default settings are used to enable replication. [Learn more](azure-to-azure-tutorial-enable-replication.md).
## Prerequisites
Sign in to the [Azure portal](https://portal.azure.com).
The following steps enable VM replication to a secondary location.

1. On the Azure portal, from **Home** > **Virtual machines** menu, select a VM to replicate.
-1. In **Operations** select **Disaster recovery**.
+1. In **Operations**, select **Disaster recovery**.
1. From **Basics** > **Target region**, select the target region.
1. To view the replication settings, select **Review + Start replication**. If you need to change any defaults, select **Advanced settings**.
-1. To start the job that enables VM replication select **Start replication**.
+1. To start the job that enables VM replication, select **Start replication**.
   :::image type="content" source="media/azure-to-azure-quickstart/enable-replication1.png" alt-text="Enable replication.":::

## Verify settings
-After the replication job finishes, you can check the replication status, modify replication settings, and test the deployment.
+After the replication job is complete, you can check the replication status, modify replication settings, and test the deployment.
1. On the Azure portal menu, select **Virtual machines** and select the VM that you replicated.
-1. In **Operations** select **Disaster recovery**.
+1. In **Operations**, select **Disaster recovery**.
1. To view the replication details from the **Overview**, select **Essentials**. More details are shown in the **Health and status**, **Failover readiness**, and the **Infrastructure view** map.

   :::image type="content" source="media/azure-to-azure-quickstart/replication-status.png" alt-text="Replication status.":::
To stop replication of the VM in the primary region, you must disable replicatio
- The Site Recovery extension installed on the VM during replication isn't removed.
- Site Recovery billing for the VM stops.
-To disable replication, do these steps:
+To disable replication, perform these steps:
1. On the Azure portal menu, select **Virtual machines** and select the VM that you replicated.
-1. In **Operations** select **Disaster recovery**.
-1. From the **Overview**, select **Disable Replication**.
+1. In **Operations**, select **Disaster recovery**.
+1. From **Overview**, select **Disable Replication**.
1. To uninstall the Site Recovery extension, go to the VM's **Settings** > **Extensions**.

   :::image type="content" source="media/azure-to-azure-quickstart/disable2-replication.png" alt-text="Disable replication.":::
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
Previously updated : 04/07/2020 Last updated : 04/29/2022

# Troubleshoot Azure-to-Azure VM replication errors
-This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of Azure virtual machines (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md).
+This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of [Azure virtual machines](azure-to-azure-tutorial-enable-replication.md) (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md).
## Azure resource quota issues (error code 150097)
Replication couldn't be enabled for the virtual machine <VmName>.
### Fix the problem
-Contact [Azure billing support](../azure-portal/supportability/regional-quota-requests.md) to enable your subscription to create VMs of the required sizes in the target location. Then, retry the failed operation.
+Contact [Azure billing support](../azure-portal/supportability/regional-quota-requests.md) to enable your subscription to create VMs of the required sizes in the target location. Then retry the failed operation.
If the target location has a capacity constraint, disable replication to that location. Then, enable replication to a different location where your subscription has sufficient quota to create VMs of the required sizes.
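To see whether a candidate target region has enough compute quota before re-enabling replication there, a hypothetical Azure CLI check (the region name is a placeholder):

```azurecli
# List compute usage against quota limits in a candidate target region.
az vm list-usage --location westus2 --output table
```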
site-recovery Azure To Azure Tutorial Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md
Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 07/25/2021 Last updated : 04/29/2022 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable.
This tutorial shows you how to set up disaster recovery for Azure VMs using [Azu
> * Create a Recovery Services vault
> * Enable VM replication
-When you enable replication for a VM to set up disaster recovery, the Site Recovery Mobility service extension installs on the VM, and registers it with Azure Site Recovery. During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail over a VM during disaster recovery, a recovery point is used to restore the VM in the target region.
+When you enable [replication](azure-to-azure-quickstart.md) for a VM to set up disaster recovery, the Site Recovery Mobility service extension installs on the VM, and registers it with Azure Site Recovery. During replication, VM disk writes are sent to a cache storage account in the source region. Data is sent from there to the target region, and recovery points are generated from the data. When you fail over a VM during disaster recovery, a recovery point is used to restore the VM in the target region. [Learn more](azure-to-azure-architecture.md) about the architecture.
> [!NOTE]
> Tutorials provide instructions with the simplest default settings. If you want to set up Azure VM disaster recovery with customized settings, review [this article](azure-to-azure-how-to-enable-replication.md).
Before you start this tutorial:
## Check Azure settings
-Check permissions, and settings in the target region.
+Check permissions and settings in the target region.
### Check permissions
If you're using a URL-based firewall proxy to control outbound connectivity, all
#### Outbound connectivity for IP address ranges
-If you're using network security groups (NSGs) to control connectivity, create service-tag based NSG rules that allow HTTPS outbound to port 443 for these [service tags](../virtual-network/service-tags-overview.md#available-service-tags)(groups of IP addresses):
+If you're using network security groups (NSGs) to control connectivity, create service-tag based NSG rules that allow HTTPS outbound to port 443 for these [service tags](../virtual-network/service-tags-overview.md#available-service-tags) (groups of IP addresses), as sketched after the tag table:
**Tag** | **Allow**
--- | ---
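A sketch of one such outbound rule with the Azure CLI; the resource group, NSG, rule name, priority, and the `EventHub` tag are placeholders for your own values and whichever tags your scenario requires:

```azurecli
# Allow outbound HTTPS (443) to a Site Recovery-related service tag.
# MyResourceGroup, MyNsg, and EventHub are hypothetical values.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowSiteRecoveryOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes EventHub
```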
site-recovery Physical Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-architecture.md
If you're using a URL-based firewall proxy to control outbound connectivity, all
> [!NOTE]
> Replication isn't supported over a site-to-site VPN from an on-premises site or Azure ExpressRoute [private peering](concepts-expressroute-with-site-recovery.md#on-premises-to-azure-replication-with-expressroute).
+For information related to troubleshooting, see [this article](vmware-azure-troubleshoot-replication.md).
+
**Physical to Azure replication process**

![Replication process](./media/physical-azure-architecture/v2a-architecture-henry.png)
site-recovery Physical Azure Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-disaster-recovery.md
Title: Set up disaster recovery of physical on-premises servers with Azure Site
description: Learn how to set up disaster recovery to Azure for on-premises Windows and Linux servers, with the Azure Site Recovery service. Previously updated : 07/14/2021 Last updated : 05/02/2022
Last updated 07/14/2021
The [Azure Site Recovery](site-recovery-overview.md) service contributes to your disaster recovery strategy by managing and orchestrating replication, failover, and failback of on-premises machines, and Azure virtual machines (VMs).
-This tutorial shows you how to set up disaster recovery of on-premises physical Windows and Linux servers to Azure. In this tutorial, you learn how to:
+This tutorial shows how to set up disaster recovery of on-premises physical Windows and Linux servers to Azure. In this tutorial, you learn how to:
> [!div class="checklist"]
> * Set up Azure and on-premises prerequisites
This tutorial shows you how to set up disaster recovery of on-premises physical
To complete this tutorial:

-- Make sure that you understand the [architecture and components](physical-azure-architecture.md) for this scenario.
+- Make sure you understand the [architecture and components](physical-azure-architecture.md) for this scenario.
- Review the [support requirements](vmware-physical-secondary-support-matrix.md) for all components.
- Make sure that the servers you want to replicate comply with [Azure VM requirements](vmware-physical-secondary-support-matrix.md#replicated-vm-support).
- Prepare Azure. You need an Azure subscription, an Azure virtual network, and a storage account.
site-recovery Site Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md
Title: About Azure Site Recovery description: Provides an overview of the Azure Site Recovery service, and summarizes disaster recovery and migration deployment scenarios. Previously updated : 08/19/2021 Last updated : 05/02/2022
Welcome to the Azure Site Recovery service! This article provides a quick service overview.
-As an organization you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your data safe, and your apps and workloads online, when planned and unplanned outages occur.
+As an organization, you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your data safe, and your apps and workloads online, when planned and unplanned outages occur.
Azure Recovery Services contributes to your BCDR strategy:

-- **Site Recovery service**: Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to secondary location, and access apps from there. After the primary location is running again, you can fail back to it.
+- **Site Recovery service**: Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery [replicates](azure-to-azure-quickstart.md) workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location, and access apps from there. After the primary location is running again, you can fail back to it.
- **Backup service**: The [Azure Backup](../backup/index.yml) service keeps your data safe and recoverable.

Site Recovery can manage replication for:
Site Recovery can manage replication for:
**VMware VM replication** | You can replicate VMware VMs to Azure using the improved Azure Site Recovery replication appliance that offers better security and resilience than the configuration server. For more information, see [Disaster recovery of VMware VMs](vmware-azure-about-disaster-recovery.md).
**On-premises VM replication** | You can replicate on-premises VMs and physical servers to Azure, or to a secondary on-premises datacenter. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter.
**Workload replication** | Replicate any workload running on supported Azure VMs, on-premises Hyper-V and VMware VMs, and Windows/Linux physical servers.
-**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created, based on the replicated data.
+**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created based on the replicated data.
**RTO and RPO targets** | Keep recovery time objectives (RTO) and recovery point objectives (RPO) within organizational limits. Site Recovery provides continuous replication for Azure VMs and VMware VMs, and replication frequency as low as 30 seconds for Hyper-V. You can reduce RTO further by integrating with [Azure Traffic Manager](https://azure.microsoft.com/blog/reduce-rto-by-using-azure-traffic-manager-with-azure-site-recovery/).
**Keep apps consistent over failover** | You can replicate using recovery points with application-consistent snapshots. These snapshots capture disk data, all data in memory, and all transactions in process.
**Testing without disruption** | You can easily run disaster recovery drills, without affecting ongoing replication.
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Previously updated : 05/27/2021 Last updated : 05/02/2022

# Automate Mobility Service installation
When you deploy Site Recovery for disaster recovery of on-premises VMware VMs an
- **Push installation**: Let Site Recovery install the Mobility service agent when you enable replication for a machine in the Azure portal.
- **Manual installation**: Install the Mobility service manually on each machine. [Learn more](vmware-physical-mobility-service-overview.md) about push and manual installation.
-- **Automated deployment**: Automate installation with software deployment tools such as Microsoft Endpoint Configuration Manager, or third-party tools such as JetPatch.
+- **Automated deployment**: Automate installation with software deployment tools such as Microsoft Endpoint Configuration Manager, or third-party tools such as JetPatch. [Learn more](vmware-physical-mobility-service-overview.md).
Automated installation and updating provides a solution if:
site-recovery Vmware Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-troubleshoot-replication.md
Previously updated : 08/2/2019 Last updated : 05/02/2022
When you try to select the source machine to enable replication by using Site Re
### Troubleshoot protected virtual machines greyed out in the portal
-Virtual machines that are replicated under Site Recovery aren't available in the Azure portal if there are duplicate entries in the system. To learn how to delete stale entries and resolve the issue, refer to [Azure Site Recovery VMware-to-Azure: How to clean up duplicate or stale entries](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx).
+Virtual machines that are replicated under Site Recovery aren't available in the Azure portal if there are duplicate entries in the system. [Learn more](https://social.technet.microsoft.com/wiki/contents/articles/32026.asr-vmware-to-azure-how-to-cleanup-duplicatestale-entries.aspx) about deleting stale entries and resolving the issue.
## No crash consistent recovery point available for the VM in the last 'XXX' minutes
site-recovery Vmware Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial.md
Title: Set up VMware VM disaster recovery to Azure with Azure Site Recovery - Cl
description: Learn how to set up disaster recovery to Azure for on-premises VMware VMs with Azure Site Recovery - Classic. Previously updated : 11/12/2019 Last updated : 02/05/2022
This article describes how to enable replication for on-premises VMware VMs, for
For information about disaster recovery in Azure Site Recovery Preview, see [this article](vmware-azure-set-up-replication-tutorial-preview.md).
-This is the third tutorial in a series that shows you how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises VMware environment](vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
+This is the third tutorial in a series that shows how to set up disaster recovery to Azure for on-premises VMware VMs. In the previous tutorial, we [prepared the on-premises VMware environment](vmware-azure-tutorial-prepare-on-premises.md) for disaster recovery to Azure.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
Complete the previous tutorials:

1. Make sure you've [set up Azure](tutorial-prepare-azure.md) for on-premises VMware disaster recovery to Azure.
2. Follow [these steps](vmware-azure-tutorial-prepare-on-premises.md) to prepare your on-premises VMware deployment for disaster recovery to Azure.
-3. In this tutorial we show you how to replicate a single VM. If you're deploying multiple VMware VMs you should use the [Deployment Planner Tool](https://aka.ms/asr-deployment-planner). [Learn more](site-recovery-deployment-planner.md) about this tool.
+3. In this tutorial, we show you how to replicate a single VM. If you're deploying multiple VMware VMs, you should use the [Deployment Planner Tool](https://aka.ms/asr-deployment-planner). [Learn more](site-recovery-deployment-planner.md) about this tool.
4. This tutorial uses a number of options you might want to do differently:
   - The tutorial uses an OVA template to create the configuration server VMware VM. If you can't do this for some reason, follow [these instructions](physical-manage-configuration-server.md) to set up the configuration server manually.
   - In this tutorial, Site Recovery automatically downloads and installs MySQL to the configuration server. If you prefer, you can set it up manually instead. [Learn more](vmware-azure-deploy-configuration-server.md#configure-settings).
Select and verify target resources.
- The policy is automatically associated with the configuration server.
- A matching policy is automatically created for failback by default. For example, if the replication policy is **rep-policy**, then the failback policy is **rep-policy-failback**. This policy isn't used until you initiate a failback from Azure.
-Note: In VMware-to-Azure scenario the crash-consistent snapshot is taken at 5 min interval.
+> [!NOTE]
+> In the VMware-to-Azure scenario, a crash-consistent snapshot is taken at a 5-minute interval.
## Enable replication
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recovery. description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 08/02/2021 Last updated : 05/02/2022

# Support matrix for disaster recovery of VMware VMs and physical servers to Azure
Ports | 443 used for control channel orchestration<br/>9443 for data transport
## Replicated machines
-In preview, replication is done by the Azure Site Recovery replication appliance. For detailed information about replication appliance, see [this article](deploy-vmware-azure-replication-appliance-preview.md)
+In preview, replication is done by the Azure Site Recovery replication appliance. For detailed information about the replication appliance, see [this article](deploy-vmware-azure-replication-appliance-preview.md).
Site Recovery supports replication of any workload running on a supported machine.
Site Recovery supports replication of any workload running on a supported machin
| Machine settings | Machines that replicate to Azure must meet [Azure requirements](#azure-vm-requirements).
Machine workload | Site Recovery supports replication of any workload running on a supported machine. [Learn more](./site-recovery-workload.md).
-Machine name | Ensure that the display name of machine does not fall into [Azure reserved resource names](../azure-resource-manager/templates/error-reserved-resource-name.md)<br/><br/> Logical volume names are not case-sensitive. Ensure that no two volumes on a device have same name. Ex: Volumes with names "voLUME1", "volume1" cannot be protected through Azure Site Recovery.
+Machine name | Ensure that the display name of the machine does not fall into [Azure reserved resource names](../azure-resource-manager/templates/error-reserved-resource-name.md).<br/><br/> Logical volume names are not case-sensitive. Ensure that no two volumes on a device have the same name. For example, volumes named "voLUME1" and "volume1" cannot be protected through Azure Site Recovery. A quick duplicate-name check is sketched below.
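As a quick way to spot case-insensitive duplicates among LVM logical volume names before enabling protection, a sketch (assumes the `lvs` utility is present; adapt for non-LVM volumes):

```bash
# Print any logical volume names that collide when case is ignored.
# Empty output means no conflicts such as "voLUME1" vs "volume1".
lvs --noheadings -o lv_name | tr -d ' ' | sort -f | uniq -di
```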
### For Windows
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure |
20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
-**Note: For Ubuntu 20.04, we had initially rolled out support for kernels 5.8.* but we have since found issues with support for this kernel and hence have redacted these kernels from our support statement for the time being.
+> [!NOTE]
+> For Ubuntu 20.04, we had initially rolled out support for kernels 5.8.*, but we have since found issues with support for these kernels and have removed them from our support statement for the time being.
### Debian kernel versions
Multi-queue block IO devices | Not supported.
Physical servers with the HP CCISS storage controller | Not supported.
Device/Mount point naming convention | Device name or mount point name should be unique.<br/> Ensure that no two devices/mount points have case-sensitive names. For example, naming devices for the same VM as *device1* and *Device1* isn't supported.
Directories | If you're running a version of the Mobility service earlier than version 9.20 (released in [Update Rollup 31](https://support.microsoft.com/help/4478871/)), then these restrictions apply:<br/><br/> - These directories (if set up as separate partitions/file-systems) must be on the same OS disk on the source server: /(root), /boot, /usr, /usr/local, /var, /etc.</br> - The /boot directory should be on a disk partition and not be an LVM volume.<br/><br/> From version 9.20 onwards, these restrictions don't apply.
-Boot directory | - Boot disks with GPT partition format are supported. GPT disks are also supported as data disks.<br/><br/> Multiple boot disks on a VM aren't supported<br/><br/> - /boot on an LVM volume across more than one disk isn't supported.<br/> - A machine without a boot disk can't be replicated.
+Boot directory | - Boot disks with GPT partition format are supported. GPT disks are also supported as data disks.<br/><br/> Multiple boot disks on a VM aren't supported.<br/><br/> - /boot on an LVM volume across more than one disk isn't supported.<br/> - A machine without a boot disk can't be replicated.
Free space requirements | 2 GB on the /root partition <br/><br/> 250 MB on the installation folder
XFSv5 | XFSv5 features on XFS file systems, such as metadata checksum, are supported (Mobility service version 9.10 onwards).<br/> Use the xfs_info utility to check the XFS superblock for the partition. If `ftype` is set to 1, then XFSv5 features are in use.
BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com/help/4490016) (version 9.22 of the Mobility service) onwards. BTRFS isn't supported if:<br/><br/> - The BTRFS file system subvolume is changed after enabling protection.</br> - The BTRFS file system is spread over multiple disks.</br> - The BTRFS file system supports RAID.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Previously updated : 08/19/2021 Last updated : 04/28/2022

# About the Mobility service for VMware VMs and physical servers
-When you set up disaster recovery for VMware virtual machines (VM) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine, and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods:
+When you set up disaster recovery for VMware virtual machines (VM) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods:
- [Push installation](#push-installation): When protection is enabled via the Azure portal, Site Recovery installs the Mobility service on the server.
- Manual installation: You can install the Mobility service manually on each machine through the [user interface (UI)](#install-the-mobility-service-using-ui-classic) or [command prompt](#install-the-mobility-service-using-command-prompt-classic).
Push installation is an integral part of the job that's run from the Azure porta
- Ensure that all push installation [prerequisites](vmware-azure-install-mobility-service.md) are met.
- Ensure that all server configurations meet the criteria in the [Support matrix for disaster recovery of VMware VMs and physical servers to Azure](vmware-physical-azure-support-matrix.md).
-- From 9.36 version onwards, for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 ensure the latest installer is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server)
+- From version 9.36 onwards, ensure the latest installer for SUSE Linux Enterprise Server 11 SP3, RHEL 5, CentOS 5, Debian 7 is [available on the configuration server and scale-out process server](#download-latest-mobility-agent-installer-for-suse-11-sp3-rhel-5-debian-7-server).
The push installation workflow is described in the following sections:

### Mobility service agent version 9.23 and higher
-For more information about version 9.23 see [Update Rollup 35 for Azure Site Recovery](https://support.microsoft.com/help/4494485/update-rollup-35-for-azure-site-recovery).
+For more information about version 9.23, see [Update Rollup 35 for Azure Site Recovery](https://support.microsoft.com/help/4494485/update-rollup-35-for-azure-site-recovery).
During a push installation of the Mobility service, the following steps are performed:
-1. The agent is pushed to the source machine. Copying the agent to the source machine can fail due to multiple environmental errors. Visit [our guidance](vmware-azure-troubleshoot-push-install.md) to troubleshoot push installation failures.
+1. The agent is pushed to the source machine. Copying the agent to the source machine can fail due to multiple environmental errors. Refer to [our guidance](vmware-azure-troubleshoot-push-install.md) to troubleshoot push installation failures.
1. After the agent is successfully copied to the server, a prerequisite check is performed on the server.
   - If all prerequisites are met, the installation begins.
- - The installation fails if one or more of the [prerequisites](vmware-physical-azure-support-matrix.md) aren't met.
+ - If one or more [prerequisites](vmware-physical-azure-support-matrix.md) aren't met, the installation fails.
1. As part of the agent installation, the Volume Shadow Copy Service (VSS) provider for Azure Site Recovery is installed. The VSS provider is used to generate application-consistent recovery points. If installation of the VSS provider fails, this step is skipped and the agent installation continues.
1. If the agent installation succeeds but the VSS provider installation fails, then the job status is marked as **Warning**. This doesn't impact crash-consistent recovery point generation.
Syntax | `cd /usr/local/ASR/Vx/bin<br/><br/> UnifiedAgentConfigurator.sh -i \<CS
## Locate installer files
-On the configuration server go to the folder _%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository_. Check which installer you need based on the operating system. The following table summarizes the installer files for each VMware VM and physical server operating system. Before you begin, you can review the [supported operating systems](vmware-physical-azure-support-matrix.md#replicated-machines).
+On the configuration server, go to the folder _%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository_. Check which installer you need based on the operating system. The following table summarizes the installer files for each VMware VM and physical server operating system. Before you begin, you can review the [supported operating systems](vmware-physical-azure-support-matrix.md#replicated-machines).
> [!NOTE]
> The file names use the syntax shown in the following table with _version_ and _date_ as placeholders for the real values. The actual file names will look similar to these examples:
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
## Generate Mobility Service configuration file
- use the following steps to generate mobility service configuration file:
+ Use the following steps to generate the Mobility Service configuration file:
1. Navigate to the appliance with which you want to register your source machine. Open the Microsoft Azure Appliance Configuration Manager and navigate to the section **Mobility service configuration details**.
2. Paste the Machine Details string that you copied from the Mobility Service in the input field here.
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
![Image showing download configuration file option for Mobility Service](./media/vmware-physical-mobility-service-overview-preview/download-configuration-file.png)
-This will download the Mobility Service configuration file. Copy this file to a local folder in your source machine. You can place it in the same folder as the Mobility Service installer.
+This downloads the Mobility Service configuration file. Copy the downloaded file to a local folder in your source machine. You can place it in the same folder as the Mobility Service installer.
See information about [upgrading the mobility services](upgrade-mobility-service-preview.md).
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Previously updated : 11/15/2021 Last updated : 05/04/2022
Most workloads that require cloud file storage work well on either Azure Files o
| Category | Azure Files | Azure NetApp Files |
|---|---|---|
| Description | [Azure Files](https://azure.microsoft.com/services/storage/files/) is a fully managed, highly available, enterprise-grade service that is optimized for random access workloads with in-place data updates.<br><br> Azure Files is built on the same Azure storage platform as other services like Azure Blobs. | [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is a fully managed, highly available, enterprise-grade NAS service that can handle the most demanding, high-performance, low-latency workloads requiring advanced data management capabilities. It enables the migration of workloads, which are deemed "un-migratable" without.<br><br> ANF is built on NetApp's bare metal with ONTAP storage OS running inside the Azure datacenter for a consistent Azure experience and an on-premises like performance. |
-| Protocols | Premium<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>NFS 4.1</li><li>REST</li></ul><br>Standard<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>REST</li></ul><br> To learn more, see [available file share protocols](./storage-files-planning.md#available-protocols). | All tiers<br><ul><li>SMB 2.x, 3.x</li><li>NFS 3.0, 4.1</li><li>Dual protocol access (NFSv3/SMB)</li></ul><br> To learn more, see how to create [NFS](../../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB](../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), or [dual-protocol](../../azure-netapp-files/create-volumes-dual-protocol.md) volumes. |
-| Region Availability | Premium<br><ul><li>30+ Regions</li></ul><br>Standard<br><ul><li>All regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). | All tiers<br><ul><li>28+ Regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). |
+| Protocols | Premium<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>NFSv4.1</li><li>REST</li></ul><br>Standard<br><ul><li>SMB 2.1, 3.0, 3.1.1</li><li>REST</li></ul><br> To learn more, see [available file share protocols](./storage-files-planning.md#available-protocols). | All tiers<br><ul><li>SMB 2.1, 3.x (including SMB Continuous Availability optionally)</li><li>NFSv3, NFSv4.1</li><li>Dual protocol access (NFSv3/SMB and NFSv4.1/SMB)</li></ul><br> To learn more, see how to create [NFS](../../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB](../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), or [dual-protocol](../../azure-netapp-files/create-volumes-dual-protocol.md) volumes. |
+| Region Availability | Premium<br><ul><li>30+ Regions</li></ul><br>Standard<br><ul><li>All regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). | All tiers<br><ul><li>35+ Regions</li></ul><br> To learn more, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). |
| Redundancy | Premium<br><ul><li>LRS</li><li>ZRS</li></ul><br>Standard<br><ul><li>LRS</li><li>ZRS</li><li>GRS</li><li>GZRS</li></ul><br> To learn more, see [redundancy](./storage-files-planning.md#redundancy). | All tiers<br><ul><li>Built-in local HA</li><li>[Cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li></ul> |
| Service-Level Agreement (SLA)<br><br> Note that SLAs for Azure Files and Azure NetApp Files are calculated differently. | [SLA for Azure Files](https://azure.microsoft.com/support/legal/sla/storage/) | [SLA for Azure NetApp Files](https://azure.microsoft.com/support/legal/sla/netapp) |
-| Identity-Based Authentication and Authorization | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> Note that identify-based authentication is only supported when using SMB protocol. To learn more, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control). | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> NFS/SMB dual protocol<ul><li>ADDS/LDAP integration</li></ul><br>NFSv3/NFSv4.1<ul><li>ADDS/LDAP integration with NFS extended groups [(preview)](../../azure-netapp-files/configure-ldap-extended-groups.md)</li></ul><br> To learn more, see [Azure NetApp Files NFS FAQ](../../azure-netapp-files/faq-nfs.md) and [Azure NetApp Files SMB FAQ](../../azure-netapp-files/faq-smb.md). |
+| Identity-Based Authentication and Authorization | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> Note that identity-based authentication is only supported when using SMB protocol. To learn more, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control). | SMB<br><ul><li>Active Directory Domain Services (AD DS)</li><li>Azure Active Directory Domain Services (Azure AD DS)</li></ul><br> NFS/SMB dual protocol<ul><li>ADDS/LDAP integration</li><li>[ADDS/LDAP over TLS](../../azure-netapp-files/configure-ldap-over-tls.md)</li></ul><br>NFSv3/NFSv4.1<ul><li>[ADDS/LDAP integration with NFS extended groups](../../azure-netapp-files/configure-ldap-extended-groups.md)</li></ul><br> To learn more, see [Azure NetApp Files NFS FAQ](../../azure-netapp-files/faq-nfs.md) and [Azure NetApp Files SMB FAQ](../../azure-netapp-files/faq-smb.md). |
| Encryption | All protocols<br><ul><li>Encryption at rest (AES-256) with customer or Microsoft-managed keys</li></ul><br>SMB<br><ul><li>Kerberos encryption using AES-256 (recommended) or RC4-HMAC</li><li>Encryption in transit</li></ul><br>REST<br><ul><li>Encryption in transit</li></ul><br> To learn more, see [Security and networking](files-nfs-protocol.md#security-and-networking). | All protocols<br><ul><li>Encryption at rest (AES-256) with Microsoft-managed keys </li></ul><br>SMB<ul><li>Encryption in transit using AES-CCM (SMB 3.0) and AES-GCM (SMB 3.1.1)</li></ul><br>NFS 4.1<ul><li>Encryption in transit using Kerberos with AES-256</li></ul><br> To learn more, see [security FAQ](../../azure-netapp-files/faq-security.md). |
-| Access Options | <ul><li>Internet</li><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>Azure File Sync</li></ul><br> To learn more, see [network considerations](./storage-files-networking-overview.md). | <ul><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[HPC Cache](../../hpc-cache/hpc-cache-overview.md)</li></ul><br> To learn more, see [network considerations](../../azure-netapp-files/azure-netapp-files-network-topologies.md). |
-| Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-Region Replication](../../azure-netapp-files/cross-region-replication-introduction.md) </li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md). |
+| Access Options | <ul><li>Internet</li><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>Azure File Sync</li></ul><br> To learn more, see [network considerations](./storage-files-networking-overview.md). | <ul><li>Secure VNet access</li><li>VPN Gateway</li><li>ExpressRoute</li><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[HPC Cache](../../hpc-cache/hpc-cache-overview.md)</li><li>[Standard Network Features](../../azure-netapp-files/azure-netapp-files-network-topologies.md#configurable-network-features)</li></ul><br> To learn more, see [network considerations](../../azure-netapp-files/azure-netapp-files-network-topologies.md). |
+| Data Protection | <ul><li>Incremental snapshots</li><li>File/directory user self-restore</li><li>Restore to new location</li><li>In-place revert</li><li>Share-level soft delete</li><li>Azure Backup integration</li></ul><br> To learn more, see [Azure Files enhances data protection capabilities](https://azure.microsoft.com/blog/azure-files-enhances-data-protection-capabilities/). | <ul><li>[Azure NetApp Files backup](../../azure-netapp-files/backup-introduction.md)</li><li>Snapshots (255/volume)</li><li>File/directory user self-restore</li><li>Restore to new volume</li><li>In-place revert</li><li>[Cross-Region Replication](../../azure-netapp-files/cross-region-replication-introduction.md)</li></ul><br> To learn more, see [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md). |
| Migration Tools | <ul><li>Azure Data Box</li><li>Azure File Sync</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li></ul><br> To learn more, see [Migrate to Azure file shares](./storage-files-migration-overview.md). | <ul><li>[Global File Cache](https://cloud.netapp.com/global-file-cache/azure)</li><li>[CloudSync](https://cloud.netapp.com/cloud-sync-service), [XCP](https://xcp.netapp.com/)</li><li>Storage Migration Service</li><li>AzCopy</li><li>Robocopy</li><li>Application-based (for example, HSR, Data Guard, AOAG)</li></ul> |
| Tiers | <ul><li>Premium</li><li>Transaction Optimized</li><li>Hot</li><li>Cool</li></ul><br> To learn more, see [storage tiers](./storage-files-planning.md#storage-tiers). | <ul><li>Ultra</li><li>Premium</li><li>Standard</li></ul><br> All tiers provide sub-ms minimum latency.<br><br> To learn more, see [Service Levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) and [Performance Considerations](../../azure-netapp-files/azure-netapp-files-performance-considerations.md). |
| Pricing | [Azure Files Pricing](https://azure.microsoft.com/pricing/details/storage/files/) | [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/) |
stream-analytics Sql Database Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-output-managed-identity.md
Previously updated : 11/30/2020 Last updated : 05/04/2022
-# Use managed identities to access Azure SQL Database or Azure Synapse Analytics from an Azure Stream Analytics job (Preview)
+# Use managed identities to access Azure SQL Database or Azure Synapse Analytics from an Azure Stream Analytics job
Azure Stream Analytics supports [Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for Azure SQL Database and Azure Synapse Analytics output sinks. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate due to password changes or user token expirations that occur every 90 days. When you remove the need to manually authenticate, your Stream Analytics deployments can be fully automated.
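As part of that setup, the job's managed identity is typically granted access through a contained database user. A sketch using `sqlcmd` with Azure AD authentication; the server, database, and job name are placeholders:

```bash
# Create a contained user for the job's managed identity and grant write access.
# myserver, mydb, and my-asa-job are hypothetical names.
sqlcmd -S myserver.database.windows.net -d mydb -G \
  -Q "CREATE USER [my-asa-job] FROM EXTERNAL PROVIDER; ALTER ROLE db_datawriter ADD MEMBER [my-asa-job];"
```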
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 03/30/2022 Last updated : 05/04/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## April 2022
+
+Here's what changed in April 2022:
+
+### Intune device configuration for Windows multisession now generally available
+
+Deploying Intune device configuration policies from Microsoft Endpoint Manager admin center to Windows multisession VMs on Azure Virtual Desktop is now generally available. Learn more at [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/intune-device-configuration-for-azure-virtual-desktop-multi/ba-p/3294444).
+
+### Scheduled Agent Updates public preview
+
+Scheduled Agent Updates is a new feature in public preview that lets IT admins specify the time and day the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent will update. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/scheduled-agent-updates-is-now-in-public-preview-on-azure/m-p/3285874).
+
+### RDP Shortpath for public networks now in public preview
+
+A new feature for RDP Shortpath is now in public preview. With this feature, RDP Shortpath can provide a direct UDP-based network transport for user sessions over public networks. Learn more at [Azure Virtual Desktop RDP Shortpath for public networks (preview)](shortpath-public.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-public-preview-of-azure-virtual-desktop-rdp-shortpath/m-p/3284763).
+
+### The Azure Virtual Desktop web client has a new URL
+
+Starting April 18, 2022, the Azure Virtual Desktop and Azure Virtual Desktop (classic) web clients will redirect to a new URL. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/reminder-the-avd-web-client-will-be-moving-to-a-new-url/m-p/3278231).
## March 2022

Here's what changed in March 2022:

### Live Captions with Teams on Azure Virtual Desktop now generally available
-Accessibility has always been important to us, so we are pleased to announce that Teams for Azure Virtual Desktop now supports real-time captions. Learn how to use live captions at [Use live captions in a Teams meeting](https://support.microsoft.com/en-us/office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260). For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-live-captions-is-now-generally-available-on/ba-p/3264148).
+Accessibility has always been important to us, so we are pleased to announce that Teams for Azure Virtual Desktop now supports real-time captions. Learn how to use live captions at [Use live captions in a Teams meeting](https://support.microsoft.com/office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260). For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-live-captions-is-now-generally-available-on/ba-p/3264148).
### Multimedia redirection enhancements now in public preview
We've fixed two bugs in the Azure portal user experience:
### FSLogix client, version 2009
-We've released a new version of the FSLogix client with many fixes and improvements. Learn more at [our blog post](https://social.msdn.microsoft.com/Forums/en-US/defe5828-fba4-4715-a68c-0e4d83eefa6b/release-notes-for-fslogix-apps-release-2009-29762130127?forum=FSLogix).
+We've released a new version of the FSLogix client with many fixes and improvements. Learn more at [our blog post](https://social.msdn.microsoft.com/Forums/defe5828-fba4-4715-a68c-0e4d83eefa6b/release-notes-for-fslogix-apps-release-2009-29762130127?forum=FSLogix).
### RDP Shortpath public preview
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
As a new rollout is triggered every month, a VM will receive at least one patch
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-core |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition |
+| MicrosoftWindowsServer | WindowsServer | 2022-datacenter-azure-edition-smalldisk |
## Patch orchestration modes

VMs on Azure now support the following patch orchestration modes:
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
sdc 3:0:0:0 4G
In this example, the disk that I added is `sdc`. It is at LUN 0 and is 4GB.
-For a more complex example, here is what multiple data disks looks like in the portal:
+For a more complex example, here is what multiple data disks look like in the portal:
:::image type="content" source="./media/attach-disk-portal/find-disk.png" alt-text="Screenshot of multiple disks shown in the portal.":::
sde 3:0:0:2 32G
From the output of `lsblk` you can see that the 4GB disk at LUN 0 is `sdc`, the 16GB disk at LUN 1 is `sdd`, and the 32G disk at LUN 2 is `sde`.
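The listings above can be produced with `lsblk` by selecting the name, SCSI address, and size columns; a sketch (exact column availability can vary by distro):

```bash
# NAME, HCTL (Host:Channel:Target:LUN - the last digit is the LUN),
# size, and mount point for each SCSI disk.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
```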
-### Partition a new disk
+### Prepare a new empty disk
-If you are using an existing disk that contains data, skip to mounting the disk. If you are attaching a new disk, you need to partition the disk.
+> [!IMPORTANT]
+> If you are using an existing disk that contains data, skip to [mounting the disk](#mount-the-disk).
+> The following instructions will delete data on the disk.
-The `parted` utility can be used to partition and to format a data disk.
+If you are attaching a new disk, you need to partition the disk.
-> [!NOTE]
-> It is recommended that you use the latest version `parted` that is available for your distro.
-> If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
+The `parted` utility can be used to partition and to format a data disk.
+- It is recommended that you use the latest version of `parted` that is available for your distro.
+- If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If the disk size is under 2 TiB, you can use either MBR or GPT partitioning.
The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We are also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
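A minimal sketch of those steps, assuming the new, empty disk really is `/dev/sdc` (the commands below destroy any data on it):

```bash
# Label the disk GPT, create one XFS partition spanning the whole disk,
# write the filesystem, and re-read the partition table.
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
```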
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
-1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be download as **myKey.pem**. Make sure you know where the `.pem` file was downloaded, you will need the path to it in the next step.
+1. When the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myKey.pem**. Make sure you know where the `.pem` file was downloaded; you will need the path to it in the next step.
1. When the deployment is finished, select **Go to resource**.
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
Previously updated : 09/03/2021 Last updated : 05/04/2022
Historically, most Azure Marketplace images with Linux have been configured in o
To confirm ntpd is synchronizing correctly, run the `ntpq -p` command.
-Starting in early calendar 2021, the most current Azure Marketplace images with Linux are being changed to use chronyd as the time sync service,
-and chronyd is configured to synchronize against the Azure host rather than an external NTP time source. The Azure host time is usually the best time source to synchronize
-against, as it is maintained very accurately and reliably, and is accessible without the variable network delays inherent in accessing an external NTP time source
-over the public internet.
+Some Azure Marketplace images with Linux are being changed to use chronyd as the time sync service, and chronyd is configured to synchronize against the Azure host rather than an external NTP time source. The Azure host time is usually the best time source to synchronize against, as it is maintained very accurately and reliably, and is accessible without the variable network delays inherent in accessing an external NTP time source over the public internet.
The VMICTimeSync is used in parallel and provides two functions:

- Immediately updates the Linux VM time-of-day clock after a host maintenance event
cat /sys/class/ptp/ptp0/clock_name
This should return `hyperv`, meaning the Azure host.
-In Linux VMs with Accelerated Networking enabled, you may see multiple PTP devices listed because the Mellanox mlx5 driver also creates a /dev/ptp device.
-Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be /dev/ptp0 or it might be /dev/ptp1, which makes
-it difficult to configure chronyd with the correct clock source. To solve this problem, the most recent Linux images have a udev rule that creates the
-symlink /dev/ptp_hyperv to whichever /dev/ptp entry corresponds to the Azure host. Chrony should be configured to use this symlink instead of /dev/ptp0 or /dev/ptp1.
+In Linux VMs with Accelerated Networking enabled, you may see multiple PTP devices listed because the Mellanox mlx5 driver also creates a /dev/ptp device. Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be `/dev/ptp0` or it might be `/dev/ptp1`, which makes it difficult to configure `chronyd` with the correct clock source. To solve this problem, the most recent Linux images have a `udev` rule that creates the symlink `/dev/ptp_hyperv` to whichever `/dev/ptp` entry corresponds to the Azure host. Chrony should be configured to use this symlink instead of `/dev/ptp0` or `/dev/ptp1`.
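As a sketch (the configuration file path and the refclock tuning values vary by distribution; the values below are assumptions based on common Azure images), pointing chrony at the symlink looks like:

```bash
# Add the Azure host PTP device as a reference clock, using the stable symlink
# rather than /dev/ptp0 or /dev/ptp1, then restart chronyd to pick it up.
echo "refclock PHC /dev/ptp_hyperv poll 3 dpoll -2 offset 0" | sudo tee -a /etc/chrony.conf
sudo systemctl restart chronyd
```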
### chrony
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
$vm = Set-AzVMOSDisk -VM $vm `
  -StorageAccountType "StandardSSD_LRS" `
  -CreateOption "FromImage"
-$vm = Set-AzVmSecurityType -VM $vm `
+$vm = Set-AzVmSecurityProfile -VM $vm `
  -SecurityType "TrustedLaunch"

$vm = Set-AzVmUefi -VM $vm `
virtual-machines Vm Naming Conventions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-naming-conventions.md
This page outlines the naming conventions used for Azure VMs. VMs use these nami
| *Sub-family | Used for specialized VM differentiations only |
| # of vCPUs | Denotes the number of vCPUs of the VM |
| *Constrained vCPUs | Used for certain VM sizes only. Denotes the number of vCPUs for the [constrained vCPU capable size](./constrained-vcpu.md) |
-| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> |
+| Additive Features | One or more lower case letters denote additive features, such as: <br> a = AMD-based processor <br> b = Block Storage performance <br> d = diskful (i.e., a local temp disk is present); this is for newer Azure VMs, see [Ddv4 and Ddsv4-series](./ddv4-ddsv4-series.md) <br> i = isolated size <br> l = low memory; a lower amount of memory than the memory intensive size <br> m = memory intensive; the most amount of memory in a particular size <br> t = tiny memory; the smallest amount of memory in a particular size <br> s = Premium Storage capable, including possible use of [Ultra SSD](./disks-types.md#ultra-disks) (Note: some newer sizes without the attribute of s can still support Premium Storage e.g. M128, M64, etc.)<br> |
| *Accelerator Type | Denotes the type of hardware accelerator in the specialized/GPU SKUs. Only the new specialized/GPU SKUs launched from Q3 2020 will have the hardware accelerator in the name. |
| Version | Denotes the version of the VM Family Series |
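As an illustrative reading of this convention (using a real size name as the example): in **Standard_D4ds_v4**, *D* is the family, *4* is the number of vCPUs, *d* indicates a local temp disk is present, *s* indicates Premium Storage capability, and *v4* is the version.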
virtual-machines Change Availability Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/change-availability-set.md
This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://
## Change the availability set
The following script provides an example of gathering the required information, deleting the original VM and then recreating it in a new availability set.
+The following scenario also covers an optional step that creates a snapshot of the VM's OS disk, and then creates a disk from that snapshot as a backup, because the OS disk is deleted along with the VM.
```powershell
# Set variables
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
$newAvailSetName = "myAvailabilitySet"
+ $snapshotName = "MySnapShot"
# Get the details of the VM to be moved to the Availability Set
$originalVM = Get-AzVM `
The following script provides an example of gathering the required information,
  -PlatformUpdateDomainCount 2 `
  -Sku Aligned
}
+
+# Get Current VM OS Disk metadata
+ $osDiskid = $originalVM.StorageProfile.OsDisk.ManagedDisk.Id
+ $osDiskName = $originalVM.StorageProfile.OsDisk.Name
+
+# Create Disk Snapshot (optional)
+ $snapshot = New-AzSnapshotConfig -SourceUri $osDiskid `
+ -Location $originalVM.Location `
+ -CreateOption copy
+
+ $newsnap = New-AzSnapshot `
+ -Snapshot $snapshot `
+ -SnapshotName $snapshotName `
+ -ResourceGroupName $resourceGroup
+
# Remove the original VM
Remove-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+# Create disk out of snapshot (optional)
+ $osDisk = New-AzDisk -DiskName $osDiskName -Disk `
+ (New-AzDiskConfig -Location $originalVM.Location -CreateOption Copy `
+ -SourceResourceId $newsnap.Id) `
+ -ResourceGroupName $resourceGroup
+
# Create the basic configuration for the replacement VM.
$newVM = New-AzVMConfig `
  -VMName $originalVM.Name `
The following script provides an example of gathering the required information,
  -Location $originalVM.Location `
  -VM $newVM `
  -DisableBginfoExtension
+
+# Delete Snapshot (optional)
+ Remove-AzSnapshot -ResourceGroupName $resourceGroup -SnapshotName $snapshotName -Force
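# Optional sanity check (a sketch, not part of the original script): confirm
# the recreated VM now references the new availability set.
$movedVM = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
$movedVM.AvailabilitySetReference.Id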
```

## Next steps
virtual-network-manager How To Create Hub And Spoke Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke-powershell.md
Deploy-AzNetworkManagerCommit @deployment
## Confirm deployment
-1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection create between the hub and the spokes virtual network with *AVNM* in the name.
+1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection created between the hub and the spoke virtual networks with *ANM* in the name.
1. To test *direct connectivity* between spokes, deploy a virtual machine into each spoke virtual network. Then start an ICMP request from one virtual machine to the other.
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Title: 'Create a hub and spoke topology with Azure Virtual Network Manager (Preview)' description: Learn how to create a hub and spoke network topology with Azure Virtual Network Manager.--++ Previously updated : 11/02/2021 Last updated : 05/03/2022
This section will help you create a network group containing the virtual network
1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network groups** under *Settings*, and then select **+ Add** to create a new network group.
+1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Create a network group button.":::
-1. On the *Basics* tab, enter a **Name** and a **Description** for the network group.
+1. On the *Create a network group* page, enter a **Name** and a **Description** for the network group. Then select **Add** to create the network group.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/basics.png" alt-text="Screenshot of basics tab for add a network group.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-1. To add virtual network manually, select the **Static group members** tab. For more information, see [static members](concept-network-groups.md#static-membership).
+1. You'll see the new network group added to the *Network Groups* page.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/static-group.png" alt-text="Screenshot of static group members tab.":::
+1. From the list of network groups, select **myNetworkGroup** to manage the network group memberships.
-1. To add virtual networks dynamically, select the **Conditional statements** tab. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+ :::image type="content" source="media/how-to-create-mesh-network/manage-group-membership.png" alt-text="Screenshot of manage group memberships page.":::
- :::image type="content" source="./media/how-to-create-hub-and-spoke/conditional-statements.png" alt-text="Screenshot of conditional statements tab.":::
+1. To add a virtual network manually, select the **Add** button under *Static membership*, and select the virtual networks to add. Then select **Add** to save the static membership. For more information, see [static members](concept-network-groups.md#static-membership).
-1. Once you're satisfied with the virtual networks selected for the network group, select **Review + create**. Then select **Create** once validation has passed.
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/add-static-members.png" alt-text="Screenshot of add virtual networks to network group page.":::
+
+1. To add virtual networks dynamically, select the **Define** button under *Define dynamic membership*, and then enter the conditional statements for membership. Select **Save** to save the dynamic membership conditions. For more information, see [dynamic membership](concept-network-groups.md#dynamic-membership).
+
+ :::image type="content" source="media/how-to-create-mesh-network/define-dynamic-members.png" alt-text="Screenshot of Define dynamic membership page.":::
## Create a hub and spoke connectivity configuration
This section will guide you through how to create a hub-and-spoke configuration with the network group you created in the previous section.
-1. Select **Configuration** under *Settings*, then select **+ Add a configuration**.
+1. Select **Configuration** under *Settings*, then select **+ Create**.
- :::image type="content" source="./media/how-to-create-hub-and-spoke/configuration-list.png" alt-text="Screenshot of the configurations list.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of the configurations list.":::
-1. Select **Connectivity** from the drop-down menu.
+1. Select **Connectivity configuration** from the drop-down menu.
:::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
-1. On the *Add a connectivity configuration* page, enter, or select the following information:
+1. On the *Add a connectivity configuration* page, enter the following information:
- :::image type="content" source="./media/how-to-create-hub-and-spoke/connectivity-configuration.png" alt-text="Screenshot of add a connectivity configuration page.":::
+ :::image type="content" source="media/how-to-create-mesh-network/add-config-name.png" alt-text="Screenshot of add a connectivity configuration page.":::
| Setting | Value |
| - | -- |
| Name | Enter a *name* for this configuration. |
| Description | *Optional* Enter a description about what this configuration will do. |
- | Topology | Select the **Hub and spoke** topology. |
- | Hub | Select a virtual network that will act as the hub virtual network. |
- | Existing peerings | Select this checkbox if you want to remove all previously created VNet peering between virtual networks in the network group defined in this configuration. |
-1. Then select **+ Add network groups**.
+1. Select **Next: Topology >**. Select **Hub and Spoke** under the **Topology** setting. This selection will reveal more settings.
+
+ :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/hub-configuration.png" alt-text="Screenshot of selecting a hub for the connectivity configuration.":::
+
+1. Select **Select a hub** under the **Hub** setting. Then, select the virtual network to serve as your network hub and click **Select**.
-1. On the *Add network groups* page, select the network groups you want to add to this configuration. Then select **Add** to save.
+ :::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-hub.png" alt-text="Screenshot of Select a hub configuration.":::
+
+1. Under **Spoke network groups**, select **+ add**. Then, select your network group and click **Select**.
+
+ :::image type="content" source="media/how-to-create-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Add network groups page.":::
+
+1. You'll see the following three options appear next to the network group name under **Spoke network groups**:
-1. You'll see the following three options appear next to the network group name under *Spoke network groups*:
-
:::image type="content" source="./media/how-to-create-hub-and-spoke/spokes-settings.png" alt-text="Screenshot of spoke network groups settings." lightbox="./media/how-to-create-hub-and-spoke/spokes-settings-expanded.png":::
- * *Direct connectivity*: Select **Enable peering within network group** if you want to establish VNet peering between virtual networks in the network group of the same region.
- * *Global Mesh*: Select **Enable mesh connectivity across regions** if you want to establish VNet peering for all virtual networks in the network group across regions.
- * *Gateway*: Select **Use hub as a gateway** if you have a virtual network gateway in the hub virtual network that you want this network group to use to pass traffic to on-premises.
+ | Setting | Value |
+ | - | -- |
+ | Direct connectivity | Select **Enable peering within network group** if you want to establish VNet peering between virtual networks in the network group of the same region. |
+ | Gateway | Select **Hub as a gateway** if you have a virtual network gateway in the hub virtual network that you want this network group to use to pass traffic to on-premises. This option won't be available unless a virtual network gateway is deployed in the hub virtual network. |
 | Global Mesh | Select **Enable mesh connectivity across regions** if you want to establish VNet peering for all virtual networks in the network group across regions. This option requires you to select **Enable peering within network group** first. |
Select the settings you want to enable for each network group.
-1. Finally, select **Add** to create the hub-and-spoke connectivity configuration.
+1. Finally, select **Next: Review + create >** and then **Create** to create the hub-and-spoke connectivity configuration.
## Deploy the hub and spoke configuration
-To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual network are created.
+To have this configuration take effect in your environment, you'll need to deploy the configuration to the regions where your selected virtual networks are created.
+
+> [!NOTE]
+> Make sure the virtual network gateway has been successfully deployed before deploying the connectivity configuration. If you deploy a hub and spoke configuration with **Use the hub as a gateway** enabled and there's no gateway, the deployment will fail. For more information, see [use hub as a gateway](concept-connectivity-configuration.md#use-hub-as-a-gateway).
+>
+
+1. Select **Deployments** under *Settings*, then select **Deploy configuration**.
+
+ :::image type="content" source="./media/create-virtual-network-manager-portal/deployments.png" alt-text="Screenshot of deployments page in Network Manager.":::
-1. Select **Deployments** under *Settings*, then select **Deploy a configuration**.
1. On the *Deploy a configuration* page, select the following settings:
To have this configuration take effect in your environment, you'll need to deplo
| Setting | Value |
| - | -- |
- | Configuration type | Select **Connectivity**. |
- | Configurations | Select the name of the configuration you created in the previous section. |
- | Target regions | Select all the regions that apply to virtual networks you select for the configuration. |
 | Configurations | Select **Include connectivity configurations in your goal state**. This selection will reveal more options. |
+ | Connectivity Configurations | Select the name of the connectivity configuration you created in the previous section. |
 | Target regions | Select all the regions that include virtual networks you need the configuration applied to. |
-1. Select **Deploy** and then select **OK** to commit the configuration to the selected regions.
+1. Select **Deploy**. You'll see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete. You can select the **Refresh** button to check on the status of the deployment.
-1. The deployment of the configuration can take up to 15-20 minutes, select the **Refresh** button to check on the status of the deployment.
+ :::image type="content" source="./media/how-to-create-hub-and-spoke/deploy-status.png" alt-text="Screenshot of deployment status screen." lightbox="./media/how-to-create-hub-and-spoke/deploy-status-expanded.png":::
## Confirm deployment
-1. See [view applied configuration](how-to-view-applied-configurations.md).
+1. Go to one of the virtual networks in the portal and select **Peerings** under *Settings*. You should see a new peering connection created between the hub and the spoke virtual networks with *ANM* in the name.
1. To test *direct connectivity* between spokes, deploy a virtual machine into each spoke virtual network. Then initiate an ICMP request from one virtual machine to the other.
+1. See [view applied configuration](how-to-view-applied-configurations.md).
+
## Next steps
- Learn about [Security admin rules](concept-security-admins.md)
virtual-network-manager How To Create Mesh Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network.md
Title: 'Create a mesh network topology with Azure Virtual Network Manager (Preview)' description: Learn how to create a mesh network topology with Azure Virtual Network Manager.--++ Last updated 05/02/2022
This section will help you create a network group containing the virtual network
This section will guide you through how to create a mesh configuration with the network group you created in the previous section.
-1. Select **Configuration** under *Settings*, then select **+ Create**.
+1. Select **Configurations** under *Settings*, then select **+ Create**.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of the configurations list.":::
virtual-network-manager How To View Applied Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-view-applied-configurations.md
Title: 'View configurations applied by Azure Virtual Network Manager (Preview)' description: Learn how to view configurations applied by Azure Virtual Network Manager.--++ Previously updated : 11/02/2021 Last updated : 05/04/2022
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Under **Spoke network groups**, select **+ add**. Then, select **myNetworkGroupB** for the network group and click **Select**.
- :::image type="content" source="media/tutorial-create-secured-hub-and-spoke/select-network-group.png" alt-text="Screenshot of Add network groups page.":::
+ :::image type="content" source="media/how-to-create-hub-and-spoke/add-network-group.png" alt-text="Screenshot of Add network groups page.":::
1. After you've added the network group, select the following options. Then select **Add** to create the connectivity configuration.
Deploy a virtual network gateway into the hub virtual network. This virtual netw
| Hub as gateway | Select the checkbox for **Use hub as a gateway**. |
| Global Mesh | Leave this option **unchecked**. Since both spokes are in the same region, this setting isn't required. |
-1. Select **Next: Review + create >** and then create the connectivity configuration.
+1. Select **Next: Review + create >** and then **Create** to create the connectivity configuration.
## Deploy the connectivity configuration
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-configuration.png" alt-text="Screenshot of deploy a configuration page.":::
-1. Select **Deploy**. You should now see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete.
+1. Select **Deploy**. You should now see the deployment show up in the list for those regions. The deployment of the configuration can take several minutes to complete. You can select the **Refresh** button to check on the status of the deployment.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deployment-in-progress.png" alt-text="Screenshot of deployment in progress in deployment list."::: ## Create security configuration
-1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration..
+1. Select **Configuration** under *Settings* again, then select **+ Create**, and select **SecurityAdmin** from the menu to begin creating a SecurityAdmin configuration.
1. Enter the name **mySecurityConfig** for the configuration, then select **Next: Rule collections**.
Make sure the virtual network gateway has been successfully deployed before depl
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/deploy-security.png" alt-text="Screenshot of deploying a security configuration.":::
-1. Select **Next** and then **Deploy**.You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
+1. Select **Next** and then **Deploy**. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take about 15-20 minutes to complete.
## Verify deployment of configurations
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
na Previously updated : 04/14/2021 Last updated : 05/03/2022

# Virtual network traffic routing
-Learn about how Azure routes traffic between Azure, on-premises, and Internet resources. Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. To learn more about virtual networks and subnets, see [Virtual network overview](virtual-networks-overview.md). You can override some of Azure's system routes with [custom routes](#custom-routes), and add additional custom routes to route tables. Azure routes outbound traffic from a subnet based on the routes in a subnet's route table.
+Learn about how Azure routes traffic between Azure, on-premises, and Internet resources. Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. To learn more about virtual networks and subnets, see [Virtual network overview](virtual-networks-overview.md). You can override some of Azure's system routes with [custom routes](#custom-routes), and add more custom routes to route tables. Azure routes outbound traffic from a subnet based on the routes in a subnet's route table.
## System routes
-Azure automatically creates system routes and assigns the routes to each subnet in a virtual network. You can't create system routes, nor can you remove system routes, but you can override some system routes with [custom routes](#custom-routes). Azure creates default system routes for each subnet, and adds additional [optional default routes](#optional-default-routes) to specific subnets, or every subnet, when you use specific Azure capabilities.
+Azure automatically creates system routes and assigns the routes to each subnet in a virtual network. You can't create system routes, nor can you remove system routes, but you can override some system routes with [custom routes](#custom-routes). Azure creates default system routes for each subnet, and adds more [optional default routes](#optional-default-routes) to specific subnets, or every subnet, when you use specific Azure capabilities.
### Default
Each route contains an address prefix and next hop type. When traffic leaving a
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
-* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure does *not* create default routes for subnet address ranges, because each subnet address range is within an address range of the address space of a virtual network.<br>
-* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network, to the Internet, with one exception. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes).<br>
-* **None**: Traffic routed to the **None** next hop type is dropped, rather than routed outside the subnet. Azure automatically creates default routes for the following address prefixes:<br>
+* **Virtual network**: Routes traffic between address ranges within the [address space](manage-virtual-network.md#add-or-remove-an-address-range) of a virtual network. Azure creates a route with an address prefix that corresponds to each address range defined within the address space of a virtual network. If the virtual network address space has multiple address ranges defined, Azure creates an individual route for each address range. Azure automatically routes traffic between subnets using the routes created for each address range. You don't need to define gateways for Azure to route traffic between subnets. Though a virtual network contains subnets, and each subnet has a defined address range, Azure doesn't create default routes for subnet address ranges, because each subnet address range is within an address range of the address space of a virtual network.
+* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network, to the Internet, with one exception. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services doesn't traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes).
+* **None**: Traffic routed to the **None** next hop type is dropped, rather than routed outside the subnet. Azure automatically creates default routes for the following address prefixes:
- * **10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16**: Reserved for private use in RFC 1918.<br>
+ * **10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16**: Reserved for private use in RFC 1918.
 * **100.64.0.0/10**: Reserved in RFC 6598.

If you assign any of the previous address ranges within the address space of a virtual network, Azure automatically changes the next hop type for the route from **None** to **Virtual network**. If you assign an address range to the address space of a virtual network that includes, but isn't the same as, one of the four reserved address prefixes, Azure removes the route for the prefix and adds a route for the address prefix you added, with **Virtual network** as the next hop type.

### Optional default routes
-Azure adds additional default system routes for different Azure capabilities, but only if you enable the capabilities. Depending on the capability, Azure adds optional default routes to either specific subnets within the virtual network, or to all subnets within a virtual network. The additional system routes and next hop types that Azure may add when you enable different capabilities are:
+Azure adds more default system routes for different Azure capabilities, but only if you enable the capabilities. Depending on the capability, Azure adds optional default routes to either specific subnets within the virtual network, or to all subnets within a virtual network. The other system routes and next hop types that Azure may add when you enable different capabilities are:
|Source |Address prefixes |Next hop type|Subnet within virtual network that route is added to|
|-- |- |-- |--|
Azure adds additional default system routes for different Azure capabilities, bu
|Virtual network gateway|Prefixes advertised from on-premises via BGP, or configured in the local network gateway |Virtual network gateway |All|
|Default |Multiple |VirtualNetworkServiceEndpoint|Only the subnet a service endpoint is enabled for.|
-* **Virtual network (VNet) peering**: When you create a virtual network peering between two virtual networks, a route is added for each address range within the address space of each virtual network a peering is created for. Learn more about [virtual network peering](virtual-network-peering-overview.md).<br>
-* **Virtual network gateway**: One or more routes with *Virtual network gateway* listed as the next hop type are added when a virtual network gateway is added to a virtual network. The source is also *virtual network gateway*, because the gateway adds the routes to the subnet. If your on-premises network gateway exchanges border gateway protocol ([BGP](#border-gateway-protocol)) routes with an Azure virtual network gateway, a route is added for each route propagated from the on-premises network gateway. It's recommended that you summarize on-premises routes to the largest address ranges possible, so the fewest number of routes are propagated to an Azure virtual network gateway. There are limits to the number of routes you can propagate to an Azure virtual network gateway. For details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).<br>
-* **VirtualNetworkServiceEndpoint**: The public IP addresses for certain services are added to the route table by Azure when you enable a service endpoint to the service. Service endpoints are enabled for individual subnets within a virtual network, so the route is only added to the route table of a subnet a service endpoint is enabled for. The public IP addresses of Azure services change periodically. Azure manages the addresses in the route table automatically when the addresses change. Learn more about [virtual network service endpoints](virtual-network-service-endpoints-overview.md), and the services you can create service endpoints for.<br>
+* **Virtual network (VNet) peering**: When you create a virtual network peering between two virtual networks, a route is added for each address range within the address space of each virtual network a peering is created for. Learn more about [virtual network peering](virtual-network-peering-overview.md).
+* **Virtual network gateway**: One or more routes with *Virtual network gateway* listed as the next hop type are added when a virtual network gateway is added to a virtual network. The source is also *virtual network gateway*, because the gateway adds the routes to the subnet. If your on-premises network gateway exchanges border gateway protocol ([BGP](#border-gateway-protocol)) routes with an Azure virtual network gateway, a route is added for each route propagated from the on-premises network gateway. It's recommended that you summarize on-premises routes to the largest address ranges possible, so the fewest number of routes are propagated to an Azure virtual network gateway. There are limits to the number of routes you can propagate to an Azure virtual network gateway. For details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits).
+* **VirtualNetworkServiceEndpoint**: The public IP addresses for certain services are added to the route table by Azure when you enable a service endpoint to the service. Service endpoints are enabled for individual subnets within a virtual network, so the route is only added to the route table of a subnet a service endpoint is enabled for. The public IP addresses of Azure services change periodically. Azure manages the addresses in the route table automatically when the addresses change. Learn more about [virtual network service endpoints](virtual-network-service-endpoints-overview.md), and the services you can create service endpoints for.
> [!NOTE]
- > The **VNet peering** and **VirtualNetworkServiceEndpoint** next hop types are only added to route tables of subnets within virtual networks created through the Azure Resource Manager deployment model. The next hop types are not added to route tables that are associated to virtual network subnets created through the classic deployment model. Learn more about Azure [deployment models](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+ > The **VNet peering** and **VirtualNetworkServiceEndpoint** next hop types are only added to route tables of subnets within virtual networks created through the Azure Resource Manager deployment model. The next hop types aren't added to route tables that are associated to virtual network subnets created through the classic deployment model. Learn more about Azure [deployment models](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
## Custom routes
You create custom routes by either creating [user-defined](#user-defined) routes
### User-defined
-You can create custom, or user-defined(static), routes in Azure to override Azure's default system routes, or to add additional routes to a subnet's route table. In Azure, you create a route table, then associate the route table to zero or more virtual network subnets. Each subnet can have zero or one route table associated to it. To learn about the maximum number of routes you can add to a route table and the maximum number of user-defined route tables you can create per Azure subscription, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits). If you create a route table and associate it to a subnet, the routes within it are combined with, or override, the default routes Azure adds to a subnet by default.
+You can create custom, or user-defined (static), routes in Azure to override Azure's default system routes, or to add more routes to a subnet's route table. In Azure, you create a route table, then associate the route table to zero or more virtual network subnets. Each subnet can have zero or one route table associated to it. To learn about the maximum number of routes you can add to a route table and the maximum number of user-defined route tables you can create per Azure subscription, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits). When you create a route table and associate it to a subnet, the table's routes are combined with the subnet's default routes. If there are conflicting route assignments, user-defined routes will override the default routes.
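As a sketch of that workflow (the resource and subnet names are placeholders), creating a route table, adding a user-defined route, and associating the table with a subnet can look like this in the Azure CLI:

```azurecli-interactive
# Create a route table, add a default route to a virtual appliance, and
# associate the table with an existing subnet.
az network route-table create -g MyResourceGroup -n MyRouteTable
az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable \
    -n DefaultToAppliance --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4
az network vnet subnet update -g MyResourceGroup --vnet-name MyVNet -n MySubnet \
    --route-table MyRouteTable
```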
You can specify the following next hop types when creating a user-defined route:
-* **Virtual appliance**: A virtual appliance is a virtual machine that typically runs a network application, such as a firewall. To learn about a variety of pre-configured network virtual appliances you can deploy in a virtual network, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). When you create a route with the **virtual appliance** hop type, you also specify a next hop IP address. The IP address can be:
+* **Virtual appliance**: A virtual appliance is a virtual machine that typically runs a network application, such as a firewall. To learn about various pre-configured network virtual appliances you can deploy in a virtual network, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances). When you create a route with the **virtual appliance** hop type, you also specify a next hop IP address. The IP address can be:
- * The [private IP address](./ip-services/private-ip-addresses.md) of a network interface attached to a virtual machine. Any network interface attached to a virtual machine that forwards network traffic to an address other than its own must have the Azure *Enable IP forwarding* option enabled for it. The setting disables Azure's check of the source and destination for a network interface. Learn more about how to [enable IP forwarding for a network interface](virtual-network-network-interface.md#enable-or-disable-ip-forwarding). Though *Enable IP forwarding* is an Azure setting, you may also need to enable IP forwarding within the virtual machine's operating system for the appliance to forward traffic between private IP addresses assigned to Azure network interfaces. If the appliance must route traffic to a public IP address, it must either proxy the traffic, or network address translate the private IP address of the source's private IP address to its own private IP address, which Azure then network address translates to a public IP address, before sending the traffic to the Internet. To determine required settings within the virtual machine, see the documentation for your operating system or network application. To understand outbound connections in Azure, see [Understanding outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).<br>
 * The [private IP address](./ip-services/private-ip-addresses.md) of a network interface attached to a virtual machine. Any network interface attached to a virtual machine that forwards network traffic to an address other than its own must have the Azure *Enable IP forwarding* option enabled for it. The setting disables Azure's check of the source and destination for a network interface. Learn more about how to [enable IP forwarding for a network interface](virtual-network-network-interface.md#enable-or-disable-ip-forwarding). Though *Enable IP forwarding* is an Azure setting, you may also need to enable IP forwarding within the virtual machine's operating system for the appliance to forward traffic between private IP addresses assigned to Azure network interfaces. If the appliance must route traffic to a public IP address, it must either proxy the traffic, or network address translate the source's private IP address to its own private IP address, which Azure then network address translates to a public IP address, before sending the traffic to the Internet. To determine required settings within the virtual machine, see the documentation for your operating system or network application. To understand outbound connections in Azure, see [Understanding outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
> [!NOTE]
- > Deploy a virtual appliance into a different subnet than the resources that route through the virtual appliance are deployed in. Deploying the virtual appliance to the same subnet, then applying a route table to the subnet that routes traffic through the virtual appliance, can result in routing loops, where traffic never leaves the subnet.
 > Deploy a virtual appliance into a different subnet than the resources that route through the virtual appliance are deployed in. Deploying the virtual appliance to the same subnet, then applying a route table to the subnet that routes traffic through the virtual appliance, can result in routing loops, where traffic never leaves the subnet.
> > A next hop private IP address must have direct connectivity without having to route through ExpressRoute Gateway or Virtual WAN. Setting the next hop to an IP address without direct connectivity results in an invalid user-defined routing configuration.
You can specify the following next hop types when creating a user-defined route:
You can define a route with 0.0.0.0/0 as the address prefix and a next hop type of virtual appliance, enabling the appliance to inspect the traffic and determine whether to forward or drop the traffic. If you intend to create a user-defined route that contains the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first.
-* **Virtual network gateway**: Specify when you want traffic destined for specific address prefixes routed to a virtual network gateway. The virtual network gateway must be created with type **VPN**. You cannot specify a virtual network gateway created as type **ExpressRoute** in a user-defined route because with ExpressRoute, you must use BGP for custom routes. You cannot specify Virtual Network Gateways if you have VPN and ExpressRoute coexisting connections either. You can define a route that directs traffic destined for the 0.0.0.0/0 address prefix to a [route-based](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#vpntype) virtual network gateway. On your premises, you might have a device that inspects the traffic and determines whether to forward or drop the traffic. If you intend to create a user-defined route for the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first. Instead of configuring a user-defined route for the 0.0.0.0/0 address prefix, you can advertise a route with the 0.0.0.0/0 prefix via BGP, if you've [enabled BGP for a VPN virtual network gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json).<br>
-* **None**: Specify when you want to drop traffic to an address prefix, rather than forwarding the traffic to a destination. If you haven't fully configured a capability, Azure may list *None* for some of the optional system routes. For example, if you see *None* listed as the **Next hop IP address** with a **Next hop type** of *Virtual network gateway* or *Virtual appliance*, it may be because the device isn't running, or isn't fully configured. Azure creates system [default routes](#default) for reserved address prefixes with **None** as the next hop type.<br>
-* **Virtual network**: Specify when you want to override the default routing within a virtual network. See [Routing example](#routing-example), for an example of why you might create a route with the **Virtual network** hop type.<br>
+* **Virtual network gateway**: Specify when you want traffic destined for specific address prefixes routed to a virtual network gateway. The virtual network gateway must be created with type **VPN**. You can't specify a virtual network gateway created as type **ExpressRoute** in a user-defined route because with ExpressRoute, you must use BGP for custom routes. You can't specify Virtual Network Gateways if you have VPN and ExpressRoute coexisting connections either. You can define a route that directs traffic destined for the 0.0.0.0/0 address prefix to a [route-based](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#vpntype) virtual network gateway. On your premises, you might have a device that inspects the traffic and determines whether to forward or drop the traffic. If you intend to create a user-defined route for the 0.0.0.0/0 address prefix, read [0.0.0.0/0 address prefix](#default-route) first. Instead of configuring a user-defined route for the 0.0.0.0/0 address prefix, you can advertise a route with the 0.0.0.0/0 prefix via BGP, if you've [enabled BGP for a VPN virtual network gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+* **None**: Specify when you want to drop traffic to an address prefix, rather than forwarding the traffic to a destination. If you haven't fully configured a capability, Azure may list *None* for some of the optional system routes. For example, if you see *None* listed as the **Next hop IP address** with a **Next hop type** of *Virtual network gateway* or *Virtual appliance*, it may be because the device isn't running, or isn't fully configured. Azure creates system [default routes](#default) for reserved address prefixes with **None** as the next hop type.
+* **Virtual network**: Specify when you want to override the default routing within a virtual network. See [Routing example](#routing-example), for an example of why you might create a route with the **Virtual network** hop type.
* **Internet**: Specify when you want to explicitly route traffic destined to an address prefix to the Internet, or if you want traffic destined for Azure services with public IP addresses kept within the Azure backbone network.
-You cannot specify **VNet peering** or **VirtualNetworkServiceEndpoint** as the next hop type in user-defined routes. Routes with the **VNet peering** or **VirtualNetworkServiceEndpoint** next hop types are only created by Azure, when you configure a virtual network peering, or a service endpoint.
+You can't specify **VNet peering** or **VirtualNetworkServiceEndpoint** as the next hop type in user-defined routes. Routes with the **VNet peering** or **VirtualNetworkServiceEndpoint** next hop types are only created by Azure, when you configure a virtual network peering, or a service endpoint.
### Service Tags for user-defined routes
-You can now specify a [service tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or less routes with service tags in each route table. With this release, using service tags in routing scenarios for containers is also supported. </br>
+You can now specify a [service tag](service-tags-overview.md) as the address prefix for a user-defined route instead of an explicit IP range. A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to user-defined routes and reducing the number of routes you need to create. You can currently create 25 or fewer routes with service tags in each route table. With this release, using service tags in routing scenarios for containers is also supported. </br>
#### Exact Match
-When there is an exact prefix match between a route with an explicit IP prefix and a route with a Service Tag, preference is given to the route with the explicit prefix. When multiple routes with Service Tags have matching IP prefixes, routes will be evaluated in the following order:
+When there's an exact prefix match between a route with an explicit IP prefix and a route with a Service Tag, preference is given to the route with the explicit prefix. When multiple routes with Service Tags have matching IP prefixes, routes will be evaluated in the following order:
- 1. Regional tags (eg. Storage.EastUS, AppService.AustraliaCentral)
- 2. Top level tags (eg. Storage, AppService)
- 3. AzureCloud regional tags (eg. AzureCloud.canadacentral, AzureCloud.eastasia)
+ 1. Regional tags (for example, Storage.EastUS, AppService.AustraliaCentral)
+ 2. Top level tags (for example, Storage, AppService)
+ 3. AzureCloud regional tags (for example, AzureCloud.canadacentral, AzureCloud.eastasia)
4. The AzureCloud tag </br></br>
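As an illustration (hypothetical routes): if a route table contains one route with the **Storage.EastUS** tag and another with the broader **Storage** tag, and both resolve to prefixes matching the destination IP, the regional **Storage.EastUS** route is evaluated first; a route with an exactly matching explicit IP prefix would take precedence over both.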
-To use this feature specify a Service Tag name for the address prefix parameter in route table commands. For example, in PowerShell you can create a new route to direct traffic sent to an Azure Storage IP prefix to a virtual appliance by using: </br></br>
+To use this feature, specify a Service Tag name for the address prefix parameter in route table commands. For example, in PowerShell you can create a new route to direct traffic sent to an Azure Storage IP prefix to a virtual appliance by using: </br></br>
```azurepowershell-interactive
New-AzRouteConfig -Name "StorageRoute" -AddressPrefix "Storage" -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.100.4"
```
-The same command for CLI will be: </br>
+The same command for CLI will be:
```azurecli-interactive
az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n StorageRoute --address-prefix Storage --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4
```
-</br>
-#### Known Issues (April 2021)
-
-When BGP routes are present or a Service Endpoint is configured on your subnet, routes may not be evaluated with the correct priority. This feature does not currently work for dual stack (IPv4+IPv6) virtual networks. A fix for these scenarios is currently in progress </br>
+#### Known Issues (April 2021)
-> [!NOTE]
-> While in Public Preview, there are several limitations. The feature is not currently supported in the Azure Portal and is only available through PowerShell and CLI. There is no support for use with containers.
+When BGP routes are present or a Service Endpoint is configured on your subnet, routes may not be evaluated with the correct priority. This feature doesn't currently work for dual stack (IPv4+IPv6) virtual networks. A fix for these scenarios is currently in progress.
## Next hop types across Azure tools
The name displayed and referenced for next hop types is different between the Az
|Next hop type |Azure CLI and PowerShell (Resource Manager) |Azure classic CLI and PowerShell (classic)|
|- |-- |--|
|Virtual network gateway |VirtualNetworkGateway |VPNGateway|
-|Virtual network |VNetLocal |VNETLocal (not available in the classic CLI in asm mode)|
-|Internet |Internet |Internet (not available in the classic CLI in asm mode)|
+|Virtual network |VNetLocal |VNETLocal (not available in the classic CLI in Service Management mode)|
+|Internet |Internet |Internet (not available in the classic CLI in Service Management mode)|
|Virtual appliance |VirtualAppliance |VirtualAppliance|
-|None |None |Null (not available in the classic CLI in asm mode)|
+|None |None |Null (not available in the classic CLI in Service Management mode)|
|Virtual network peering |VNet peering |Not applicable|
|Virtual network service endpoint|VirtualNetworkServiceEndpoint |Not applicable|
The name displayed and referenced for next hop types is different between the Az
An on-premises network gateway can exchange routes with an Azure virtual network gateway using the border gateway protocol (BGP). Using BGP with an Azure virtual network gateway is dependent on the type you selected when you created the gateway. If the type you selected were:
-* **ExpressRoute**: You must use BGP to advertise on-premises routes to the Microsoft Edge router. You cannot create user-defined routes to force traffic to the ExpressRoute virtual network gateway if you deploy a virtual network gateway deployed as type: ExpressRoute. You can use user-defined routes for forcing traffic from the Express Route to, for example, a Network Virtual Appliance.<br>
+* **ExpressRoute**: You must use BGP to advertise on-premises routes to the Microsoft Edge router. You can't create user-defined routes to force traffic to the ExpressRoute virtual network gateway if you deploy a virtual network gateway deployed as type: ExpressRoute. You can use user-defined routes for forcing traffic from the ExpressRoute gateway to, for example, a Network Virtual Appliance.
* **VPN**: You can optionally use BGP. For details, see [BGP with site-to-site VPN connections](../vpn-gateway/vpn-gateway-bgp-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).

When you exchange routes with Azure using BGP, a separate route is added to the route table of all subnets in a virtual network for each advertised prefix. The route is added with *Virtual network gateway* listed as the source and next hop type.
-ER and VPN Gateway route propagation can be disabled on a subnet using a property on a route table. When you do so, routes are not added to the route table of all subnets with Virtual network gateway route propagation disabled (both static routes and BGP routes). Connectivity with VPN connections is achieved using [custom routes](#custom-routes) with a next hop type of *Virtual network gateway*. **Route propagation should not be disabled on the GatewaySubnet. The gateway will not function with this setting disabled.** For details, see [How to disable Virtual network gateway route propagation](manage-route-table.md#create-a-route-table).
+ER and VPN Gateway route propagation can be disabled on a subnet using a property on a route table. When route propagation is disabled, routes (both static routes and BGP routes) aren't added to the route table of any subnet that has Virtual network gateway route propagation disabled. Connectivity with VPN connections is achieved using [custom routes](#custom-routes) with a next hop type of *Virtual network gateway*. **Route propagation shouldn't be disabled on the GatewaySubnet. The gateway will not function with this setting disabled.** For details, see [How to disable Virtual network gateway route propagation](manage-route-table.md#create-a-route-table).
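As a sketch (names are placeholders), creating a route table with gateway route propagation disabled looks like this in the Azure CLI:

```azurecli-interactive
# Create a route table that does not accept routes propagated from a virtual
# network gateway. Don't associate a table like this with the GatewaySubnet.
az network route-table create -g MyResourceGroup -n MyRouteTable --disable-bgp-route-propagation true
```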
## How Azure selects a route
See [Routing example](#routing-example) for a comprehensive routing table with e
## <a name="default-route"></a>0.0.0.0/0 address prefix
-A route with the 0.0.0.0/0 address prefix instructs Azure how to route traffic destined for an IP address that is not within the address prefix of any other route in a subnet's route table. When a subnet is created, Azure creates a [default](#default) route to the 0.0.0.0/0 address prefix, with the **Internet** next hop type. If you don't override this route, Azure routes all traffic destined to IP addresses not included in the address prefix of any other route, to the Internet. The exception is that traffic to the public IP addresses of Azure services remains on the Azure backbone network, and is not routed to the Internet. If you override this route, with a [custom](#custom-routes) route, traffic destined to addresses not within the address prefixes of any other route in the route table is sent to a network virtual appliance or virtual network gateway, depending on which you specify in a custom route.
+A route with the 0.0.0.0/0 address prefix instructs Azure how to route traffic destined for an IP address that isn't within the address prefix of any other route in a subnet's route table. When a subnet is created, Azure creates a [default](#default) route to the 0.0.0.0/0 address prefix, with the **Internet** next hop type. If you don't override this route, Azure routes all traffic destined to IP addresses not included in the address prefix of any other route to the Internet. The exception is that traffic to the public IP addresses of Azure services remains on the Azure backbone network, and isn't routed to the Internet. If you override this route with a [custom](#custom-routes) route, traffic destined to addresses not within the address prefixes of any other route in the route table is sent to a network virtual appliance or virtual network gateway, depending on which you specify in the custom route.
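A minimal sketch of such an override, assuming a hypothetical route table name and reusing the appliance address (10.0.100.4) from the routing example later in this article:

```azurecli
# Sketch: override the default 0.0.0.0/0 route so that all otherwise
# unmatched outbound traffic is sent to a network virtual appliance.
# The route table and route names are hypothetical.
az network route-table route create \
  --resource-group MyRG \
  --route-table-name MyRouteTable \
  --name Default-NVA \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.100.4
```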
When you override the 0.0.0.0/0 address prefix, in addition to outbound traffic from the subnet flowing through the virtual network gateway or virtual appliance, the following changes occur with Azure's default routing:
-* Azure sends all traffic to the next hop type specified in the route, including traffic destined for public IP addresses of Azure services. When the next hop type for the route with the 0.0.0.0/0 address prefix is **Internet**, traffic from the subnet destined to the public IP addresses of Azure services never leaves Azure's backbone network, regardless of the Azure region the virtual network or Azure service resource exist in. When you create a user-defined or BGP route with a **Virtual network gateway** or **Virtual appliance** next hop type however, all traffic, including traffic sent to public IP addresses of Azure services you haven't enabled [service endpoints](virtual-network-service-endpoints-overview.md) for, is sent to the next hop type specified in the route. If you've enabled a service endpoint for a service, traffic to the service is not routed to the next hop type in a route with the 0.0.0.0/0 address prefix, because address prefixes for the service are specified in the route that Azure creates when you enable the service endpoint, and the address prefixes for the service are longer than 0.0.0.0/0.
-* You are no longer able to directly access resources in the subnet from the Internet. You can indirectly access resources in the subnet from the Internet, if inbound traffic passes through the device specified by the next hop type for a route with the 0.0.0.0/0 address prefix before reaching the resource in the virtual network. If the route contains the following values for next hop type:<br>
+* Azure sends all traffic to the next hop type specified in the route, including traffic destined for public IP addresses of Azure services. When the next hop type for the route with the 0.0.0.0/0 address prefix is **Internet**, traffic from the subnet destined to the public IP addresses of Azure services never leaves Azure's backbone network, regardless of the Azure region the virtual network or Azure service resource exists in. However, when you create a user-defined or BGP route with a **Virtual network gateway** or **Virtual appliance** next hop type, all traffic, including traffic sent to public IP addresses of Azure services you haven't enabled [service endpoints](virtual-network-service-endpoints-overview.md) for, is sent to the next hop type specified in the route. If you've enabled a service endpoint for a service, traffic to the service isn't routed to the next hop type in a route with the 0.0.0.0/0 address prefix, because address prefixes for the service are specified in the route that Azure creates when you enable the service endpoint, and the address prefixes for the service are longer than 0.0.0.0/0.
+* You're no longer able to directly access resources in the subnet from the Internet. You can indirectly access resources in the subnet from the Internet, if inbound traffic passes through the device specified by the next hop type for a route with the 0.0.0.0/0 address prefix before reaching the resource in the virtual network. If the route contains the following values for next hop type:
- * **Virtual appliance**: The appliance must:<br>
+ * **Virtual appliance**: The appliance must:
- * Be accessible from the Internet<br>
- * Have a public IP address assigned to it,<br>
- * Not have a network security group rule associated to it that prevents communication to the device<br>
- * Not deny the communication<br>
+ * Be accessible from the Internet
+ * Have a public IP address assigned to it
+ * Not have a network security group rule associated to it that prevents communication to the device
+ * Not deny the communication
 * Be able to network address translate and forward, or proxy the traffic to the destination resource in the subnet, and return the traffic back to the Internet.
* **Virtual network gateway**: If the gateway is an ExpressRoute virtual network gateway, an Internet-connected device on-premises can network address translate and forward, or proxy the traffic to the destination resource in the subnet, via ExpressRoute's [private peering](../expressroute/expressroute-circuit-peerings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#privatepeering).
-If your virtual network is connected to an Azure VPN gateway, do not associate a route table to the [gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub) that includes a route with a destination of 0.0.0.0/0. Doing so can prevent the gateway from functioning properly. For details, see the *Why are certain ports opened on my VPN gateway?* question in the [VPN Gateway FAQ](../vpn-gateway/vpn-gateway-vpn-faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gatewayports).
+If your virtual network is connected to an Azure VPN gateway, don't associate a route table to the [gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub) that includes a route with a destination of 0.0.0.0/0. Doing so can prevent the gateway from functioning properly. For details, see the *Why are certain ports opened on my VPN gateway?* question in the [VPN Gateway FAQ](../vpn-gateway/vpn-gateway-vpn-faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gatewayports).
See [DMZ between Azure and your on-premises datacenter](/azure/architecture/reference-architectures/dmz/secure-vnet-hybrid?toc=%2fazure%2fvirtual-network%2ftoc.json) for implementation details when using virtual network gateways between the Internet and Azure.
To illustrate the concepts in this article, the sections that follow describe:
-* A scenario, with requirements<br>
-* The custom routes necessary to meet the requirements<br>
+* A scenario, with requirements
+* The custom routes necessary to meet the requirements
* The route table that exists for one subnet that includes the default and custom routes necessary to meet the requirements

> [!NOTE]
-> This example is not intended to be a recommended or best practice implementation. Rather, it is provided only to illustrate concepts in this article.
+> This example isn't intended to be a recommended or best practice implementation. Rather, it's provided only to illustrate concepts in this article.
### Requirements
1. Enable an on-premises network to communicate securely with both virtual networks through a VPN tunnel over the Internet. *Alternatively, an ExpressRoute connection could be used, but in this example, a VPN connection is used.*
1. For one subnet in one virtual network:
- * Force all outbound traffic from the subnet, except to Azure Storage and within the subnet, to flow through a network virtual appliance, for inspection and logging.<br>
- * Do not inspect traffic between private IP addresses within the subnet; allow traffic to flow directly between all resources.<br>
- * Drop any outbound traffic destined for the other virtual network.<br>
+ * Force all outbound traffic from the subnet, except to Azure Storage and within the subnet, to flow through a network virtual appliance, for inspection and logging.
+ * Don't inspect traffic between private IP addresses within the subnet; allow traffic to flow directly between all resources.
+ * Drop any outbound traffic destined for the other virtual network.
 * Enable outbound traffic to Azure Storage to flow directly to storage, without forcing it through a network virtual appliance.
1. Allow all traffic between all other subnets and virtual networks.
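A minimal sketch of some of the user-defined routes these requirements call for, using the prefixes and appliance address from the route table shown later; the resource group, table name, and route names are hypothetical:

```azurecli
# Sketch: custom routes for Subnet1. Names are hypothetical; the
# prefixes and next hops mirror routes ID2, ID3, ID6, and ID7 below.

# Force outbound virtual-network traffic through the NVA (ID2).
az network route-table route create -g MyRG --route-table-name MyRouteTable \
  --name To-NVA --address-prefix 10.0.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.100.4

# Keep intra-subnet traffic within the subnet (ID3).
az network route-table route create -g MyRG --route-table-name MyRouteTable \
  --name Within-Subnet --address-prefix 10.0.0.0/24 --next-hop-type VnetLocal

# Drop traffic destined for the peered virtual network (ID6 and ID7).
az network route-table route create -g MyRG --route-table-name MyRouteTable \
  --name Drop-Peered-1 --address-prefix 10.1.0.0/16 --next-hop-type None
az network route-table route create -g MyRG --route-table-name MyRouteTable \
  --name Drop-Peered-2 --address-prefix 10.2.0.0/16 --next-hop-type None

# Associate the table with Subnet1 only.
az network vnet subnet update -g MyRG --vnet-name Virtual-network-1 \
  --name Subnet1 --route-table MyRouteTable
```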
The route table for *Subnet1* in the picture contains the following routes:
|11 |Default|Invalid|0.0.0.0/0 |Internet | | |
|12 |User |Active |0.0.0.0/0 |Virtual appliance |10.0.100.4 |Default-NVA |
-An explanation of each route ID follows:
-
-1. Azure automatically added this route for all subnets within *Virtual-network-1*, because 10.0.0.0/16 is the only address range defined in the address space for the virtual network. If the user-defined route in route ID2 weren't created, traffic sent to any address between 10.0.0.1 and 10.0.255.254 would be routed within the virtual network, because the prefix is longer than 0.0.0.0/0, and not within the address prefixes of any of the other routes. Azure automatically changed the state from *Active* to *Invalid*, when ID2, a user-defined route, was added, since it has the same prefix as the default route, and user-defined routes override default routes. The state of this route is still *Active* for *Subnet2*, because the route table that user-defined route, ID2 is in, isn't associated to *Subnet2*.
-2. Azure added this route when a user-defined route for the 10.0.0.0/16 address prefix was associated to the *Subnet1* subnet in the *Virtual-network-1* virtual network. The user-defined route specifies 10.0.100.4 as the IP address of the virtual appliance, because the address is the private IP address assigned to the virtual appliance virtual machine. The route table this route exists in is not associated to *Subnet2*, so doesn't appear in the route table for *Subnet2*. This route overrides the default route for the 10.0.0.0/16 prefix (ID1), which automatically routed traffic addressed to 10.0.0.1 and 10.0.255.254 within the virtual network through the virtual network next hop type. This route exists to meet [requirement](#requirements) 3, to force all outbound traffic through a virtual appliance.
-3. Azure added this route when a user-defined route for the 10.0.0.0/24 address prefix was associated to the *Subnet1* subnet. Traffic destined for addresses between 10.0.0.1 and 10.0.0.254 remains within the subnet, rather than being routed to the virtual appliance specified in the previous rule (ID2), because it has a longer prefix than the ID2 route. This route was not associated to *Subnet2*, so the route does not appear in the route table for *Subnet2*. This route effectively overrides the ID2 route for traffic within *Subnet1*. This route exists to meet [requirement](#requirements) 3.
-4. Azure automatically added the routes in IDs 4 and 5 for all subnets within *Virtual-network-1*, when the virtual network was peered with *Virtual-network-2.* *Virtual-network-2* has two address ranges in its address space: 10.1.0.0/16 and 10.2.0.0/16, so Azure added a route for each range. If the user-defined routes in route IDs 6 and 7 weren't created, traffic sent to any address between 10.1.0.1-10.1.255.254 and 10.2.0.1-10.2.255.254 would be routed to the peered virtual network, because the prefix is longer than 0.0.0.0/0, and not within the address prefixes of any of the other routes. Azure automatically changed the state from *Active* to *Invalid*, when the routes in IDs 6 and 7 were added, since they have the same prefixes as the routes in IDs 4 and 5, and user-defined routes override default routes. The state of the routes in IDs 4 and 5 are still *Active* for *Subnet2*, because the route table that the user-defined routes in IDs 6 and 7 are in, isn't associated to *Subnet2*. A virtual network peering was created to meet [requirement](#requirements) 1.
-5. Same explanation as ID4.
-6. Azure added this route and the route in ID7, when user-defined routes for the 10.1.0.0/16 and 10.2.0.0/16 address prefixes were associated to the *Subnet1* subnet. Traffic destined for addresses between 10.1.0.1-10.1.255.254 and 10.2.0.1-10.2.255.254 is dropped by Azure, rather than being routed to the peered virtual network, because user-defined routes override default routes. The routes are not associated to *Subnet2*, so the routes do not appear in the route table for *Subnet2*. The routes override the ID4 and ID5 routes for traffic leaving *Subnet1*. The ID6 and ID7 routes exist to meet [requirement](#requirements) 3 to drop traffic destined to the other virtual network.
-7. Same explanation as ID6.
-8. Azure automatically added this route for all subnets within *Virtual-network-1* when a VPN type virtual network gateway was created within the virtual network. Azure added the public IP address of the virtual network gateway to the route table. Traffic sent to any address between 10.10.0.1 and 10.10.255.254 is routed to the virtual network gateway. The prefix is longer than 0.0.0.0/0 and not within the address prefixes of any of the other routes. A virtual network gateway was created to meet [requirement](#requirements) 2.
-9. Azure added this route when a user-defined route for the 10.10.0.0/16 address prefix was added to the route table associated to *Subnet1*. This route overrides ID8. The route sends all traffic destined for the on-premises network to an NVA for inspection, rather than routing traffic directly on-premises. This route was created to meet [requirement](#requirements) 3.
-10. Azure automatically added this route to the subnet when a service endpoint to an Azure service was enabled for the subnet. Azure routes traffic from the subnet to a public IP address of the service, over the Azure infrastructure network. The prefix is longer than 0.0.0.0/0 and not within the address prefixes of any of the other routes. A service endpoint was created to meet [requirement](#requirements) 3, to enable traffic destined for Azure Storage to flow directly to Azure Storage.
-11. Azure automatically added this route to the route table of all subnets within *Virtual-network-1* and *Virtual-network-2.* The 0.0.0.0/0 address prefix is the shortest prefix. Any traffic sent to addresses within a longer address prefix are routed based on other routes. By default, Azure routes all traffic destined for addresses other than the addresses specified in one of the other routes to the Internet. Azure automatically changed the state from *Active* to *Invalid* for the *Subnet1* subnet when a user-defined route for the 0.0.0.0/0 address prefix (ID12) was associated to the subnet. The state of this route is still *Active* for all other subnets within both virtual networks, because the route isn't associated to any other subnets within any other virtual networks.
-12. Azure added this route when a user-defined route for the 0.0.0.0/0 address prefix was associated to the *Subnet1* subnet. The user-defined route specifies 10.0.100.4 as the IP address of the virtual appliance. This route is not associated to *Subnet2*, so the route does not appear in the route table for *Subnet2*. All traffic for any address not included in the address prefixes of any of the other routes is sent to the virtual appliance. The addition of this route changed the state of the default route for the 0.0.0.0/0 address prefix (ID11) from *Active* to *Invalid* for *Subnet1*, because a user-defined route overrides a default route. This route exists to meet the third [requirement](#requirements).
+An explanation of each route ID follows:
+* **ID1**: Azure automatically added this route for all subnets within *Virtual-network-1*, because 10.0.0.0/16 is the only address range defined in the address space for the virtual network. If the user-defined route in route ID2 weren't created, traffic sent to any address between 10.0.0.1 and 10.0.255.254 would be routed within the virtual network, because the prefix is longer than 0.0.0.0/0, and not within the address prefixes of any of the other routes. Azure automatically changed the state from *Active* to *Invalid* when ID2, a user-defined route, was added, since it has the same prefix as the default route, and user-defined routes override default routes. The state of this route is still *Active* for *Subnet2*, because the route table that user-defined route ID2 is in isn't associated to *Subnet2*.
+* **ID2**: Azure added this route when a user-defined route for the 10.0.0.0/16 address prefix was associated to the *Subnet1* subnet in the *Virtual-network-1* virtual network. The user-defined route specifies 10.0.100.4 as the IP address of the virtual appliance, because the address is the private IP address assigned to the virtual appliance virtual machine. The route table this route exists in isn't associated to *Subnet2*, so it doesn't appear in the route table for *Subnet2*. This route overrides the default route for the 10.0.0.0/16 prefix (ID1), which automatically routed traffic addressed to 10.0.0.1 through 10.0.255.254 within the virtual network through the virtual network next hop type. This route exists to meet [requirement](#requirements) 3, to force all outbound traffic through a virtual appliance.
+* **ID3**: Azure added this route when a user-defined route for the 10.0.0.0/24 address prefix was associated to the *Subnet1* subnet. Traffic destined for addresses between 10.0.0.1 and 10.0.0.254 remains within the subnet, rather than being routed to the virtual appliance specified in the previous rule (ID2), because it has a longer prefix than the ID2 route. This route wasn't associated to *Subnet2*, so the route doesn't appear in the route table for *Subnet2*. This route effectively overrides the ID2 route for traffic within *Subnet1*. This route exists to meet [requirement](#requirements) 3.
+* **ID4**: Azure automatically added the routes in IDs 4 and 5 for all subnets within *Virtual-network-1*, when the virtual network was peered with *Virtual-network-2.* *Virtual-network-2* has two address ranges in its address space: 10.1.0.0/16 and 10.2.0.0/16, so Azure added a route for each range. If the user-defined routes in route IDs 6 and 7 weren't created, traffic sent to any address between 10.1.0.1-10.1.255.254 and 10.2.0.1-10.2.255.254 would be routed to the peered virtual network, because the prefix is longer than 0.0.0.0/0, and not within the address prefixes of any of the other routes. Azure automatically changed the state from *Active* to *Invalid* when the routes in IDs 6 and 7 were added, since they have the same prefixes as the routes in IDs 4 and 5, and user-defined routes override default routes. The state of the routes in IDs 4 and 5 is still *Active* for *Subnet2*, because the route table that contains the user-defined routes in IDs 6 and 7 isn't associated to *Subnet2*. A virtual network peering was created to meet [requirement](#requirements) 1.
+* **ID5**: Same explanation as ID4.
+* **ID6**: Azure added this route and the route in ID7, when user-defined routes for the 10.1.0.0/16 and 10.2.0.0/16 address prefixes were associated to the *Subnet1* subnet. Traffic destined for addresses between 10.1.0.1-10.1.255.254 and 10.2.0.1-10.2.255.254 is dropped by Azure, rather than being routed to the peered virtual network, because user-defined routes override default routes. The routes aren't associated to *Subnet2*, so the routes don't appear in the route table for *Subnet2*. The routes override the ID4 and ID5 routes for traffic leaving *Subnet1*. The ID6 and ID7 routes exist to meet [requirement](#requirements) 3 to drop traffic destined to the other virtual network.
+* **ID7**: Same explanation as ID6.
+* **ID8**: Azure automatically added this route for all subnets within *Virtual-network-1* when a VPN type virtual network gateway was created within the virtual network. Azure added the public IP address of the virtual network gateway to the route table. Traffic sent to any address between 10.10.0.1 and 10.10.255.254 is routed to the virtual network gateway. The prefix is longer than 0.0.0.0/0 and not within the address prefixes of any of the other routes. A virtual network gateway was created to meet [requirement](#requirements) 2.
+* **ID9**: Azure added this route when a user-defined route for the 10.10.0.0/16 address prefix was added to the route table associated to *Subnet1*. This route overrides ID8. The route sends all traffic destined for the on-premises network to an NVA for inspection, rather than routing traffic directly on-premises. This route was created to meet [requirement](#requirements) 3.
+* **ID10**: Azure automatically added this route to the subnet when a service endpoint to an Azure service was enabled for the subnet. Azure routes traffic from the subnet to a public IP address of the service, over the Azure infrastructure network. The prefix is longer than 0.0.0.0/0 and not within the address prefixes of any of the other routes. A service endpoint was created to meet [requirement](#requirements) 3, to enable traffic destined for Azure Storage to flow directly to Azure Storage.
+* **ID11**: Azure automatically added this route to the route table of all subnets within *Virtual-network-1* and *Virtual-network-2.* The 0.0.0.0/0 address prefix is the shortest prefix. Any traffic sent to addresses within a longer address prefix is routed based on other routes. By default, Azure routes all traffic destined for addresses other than the addresses specified in one of the other routes to the Internet. Azure automatically changed the state from *Active* to *Invalid* for the *Subnet1* subnet when a user-defined route for the 0.0.0.0/0 address prefix (ID12) was associated to the subnet. The state of this route is still *Active* for all other subnets within both virtual networks, because the route isn't associated to any other subnets within any other virtual networks.
+* **ID12**: Azure added this route when a user-defined route for the 0.0.0.0/0 address prefix was associated to the *Subnet1* subnet. The user-defined route specifies 10.0.100.4 as the IP address of the virtual appliance. This route isn't associated to *Subnet2*, so the route doesn't appear in the route table for *Subnet2*. All traffic for any address not included in the address prefixes of any of the other routes is sent to the virtual appliance. The addition of this route changed the state of the default route for the 0.0.0.0/0 address prefix (ID11) from *Active* to *Invalid* for *Subnet1*, because a user-defined route overrides a default route. This route exists to meet the third [requirement](#requirements).
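To see a computed table like the one above for a running VM, you can list the effective routes on its network interface. A minimal sketch, assuming a hypothetical NIC name:

```azurecli
# Sketch: show the effective routes (default, BGP, and user-defined)
# for a VM's network interface in Subnet1. MyVmNic is hypothetical.
az network nic show-effective-route-table \
  --resource-group MyRG \
  --name MyVmNic \
  --output table
```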
#### Subnet2
The route table for *Subnet2* contains all Azure-created default routes and the
## Next steps
-* [Create a user-defined route table with routes and a network virtual appliance](tutorial-create-route-table-portal.md)<br>
-* [Configure BGP for an Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br>
-* [Use BGP with ExpressRoute](../expressroute/expressroute-routing.md?toc=%2fazure%2fvirtual-network%2ftoc.json#route-aggregation-and-prefix-limits)<br>
-* [View all routes for a subnet](diagnose-network-routing-problem.md). A user-defined route table only shows you the user-defined routes, not the default, and BGP routes for a subnet. Viewing all routes shows you the default, BGP, and user-defined routes for the subnet a network interface is in.<br>
+* [Create a user-defined route table with routes and a network virtual appliance](tutorial-create-route-table-portal.md)
+* [Configure BGP for an Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+* [Use BGP with ExpressRoute](../expressroute/expressroute-routing.md?toc=%2fazure%2fvirtual-network%2ftoc.json#route-aggregation-and-prefix-limits)
+* [View all routes for a subnet](diagnose-network-routing-problem.md). A user-defined route table only shows you the user-defined routes, not the default and BGP routes, for a subnet. Viewing all routes shows you the default, BGP, and user-defined routes for the subnet a network interface is in.
* [Determine the next hop type](../network-watcher/diagnose-vm-network-routing-problem.md?toc=%2fazure%2fvirtual-network%2ftoc.json) between a virtual machine and a destination IP address. The Azure Network Watcher next hop feature enables you to determine whether traffic is leaving a subnet and being routed to where you think it should be.
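As a minimal sketch of the next hop check mentioned in the last bullet (the VM name and IP addresses are hypothetical, and Network Watcher must be enabled in the VM's region):

```azurecli
# Sketch: ask Network Watcher which next hop Azure uses for traffic
# from a VM's source IP to a destination IP. All values hypothetical.
az network watcher show-next-hop \
  --resource-group MyRG \
  --vm MyVm \
  --source-ip 10.0.0.4 \
  --dest-ip 10.10.0.10
```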
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
There are two options to add DNS servers for the P2S clients. The first method i
### For User VPN (point-to-site)- how many clients are supported?
-Each User VPN P2S gateway has two instances. Each instance supports up to a certain number of connections as the scale units change. Scale unit 1-3 supports 500 connections, scale unit 4-6 supports 1000 connections, scale unit 7-12 supports 5000 connections, and scale unit 13-18 supports up to 10,000 connections.
-
-For example, let's say the user chooses 1 scale unit. Each scale unit would imply an active-active gateway deployed and each of the instances (in this case 2) would support up to 500 connections. Since you can get 500 connections * 2 per gateway, it doesn't mean that you plan for 1000 instead of the 500 for this scale unit. Instances may need to be serviced during which connectivity for the extra 500 may be interrupted if you surpass the recommended connection count. Also, be sure to plan for downtime in case you decide to scale up or down on the scale unit, or change the point-to-site configuration on the VPN gateway.
+The following table describes the number of concurrent connections and the aggregate throughput supported by the point-to-site VPN gateway at different scale units.
+
+|Scale Unit | Gateway Instances | Supported Concurrent Connections | Aggregate Throughput|
| - | - | - | - |
+|1|2|500| 0.5 Gbps|
+|2|2|500| 1 Gbps|
+|3|2|500| 1.5 Gbps |
+|4|2|1000| 2 Gbps|
+|5|2|1000| 2.5 Gbps|
+|6|2|1000| 3 Gbps|
+|7|2|5000| 3.5 Gbps|
+|8|2|5000| 4 Gbps|
+|9|2|5000| 4.5 Gbps|
+|10|2|5000| 5 Gbps|
+|11|2|10000| 5.5 Gbps|
+|12|2|10000| 6 Gbps|
+|13|2|10000| 6.5 Gbps|
+|14|2|10000| 7 Gbps|
+|15|2|10000| 7.5 Gbps|
+|16|2|10000| 8 Gbps|
+|17|2|10000| 8.5 Gbps|
+|18|2|10000| 9 Gbps|
+|19|2|10000| 9.5 Gbps|
+|20|2|10000| 10 Gbps|
+|40|4|20000| 20 Gbps|
+|60|6|30000| 30 Gbps|
+|80|8|40000| 40 Gbps|
+|100|10|50000| 50 Gbps|
+|120|12|60000| 60 Gbps|
+|140|14|70000| 70 Gbps|
+|160|16|80000| 80 Gbps|
+|180|18|90000| 90 Gbps|
+|200|20|100000| 100 Gbps|
+
+For example, let's say the user chooses 1 scale unit. Each scale unit implies an active-active gateway deployment, and each of the instances (in this case 2) supports up to 500 connections. Although the pair can technically carry 500 connections * 2, plan for 500 connections for this scale unit rather than 1000. Instances may need to be serviced, during which connectivity for the extra 500 connections may be interrupted if you surpass the recommended connection count.
+
+For gateways with more than 20 scale units, additional highly available pairs of gateway instances are deployed to provide capacity for more connecting users. Each pair of instances supports up to 10,000 additional users. For example, if you deploy a gateway with 100 scale units, 5 gateway pairs (10 total instances) are deployed, and up to 50,000 (10,000 users x 5 gateway pairs) concurrent users can connect.
+
+Also, be sure to plan for downtime in case you decide to scale the scale units up or down, or change the point-to-site configuration on the VPN gateway.
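As a minimal sketch of how the scale unit is chosen at deployment time (all resource names hypothetical, and assuming the VPN server configuration already exists):

```azurecli
# Sketch: create a Virtual WAN point-to-site VPN gateway with 2 scale
# units; per the table above, that's 2 instances, up to 500 concurrent
# connections, and 1 Gbps aggregate throughput. Names are hypothetical.
az network p2s-vpn-gateway create \
  --resource-group MyRG \
  --name MyP2SGateway \
  --vhub MyVirtualHub \
  --vpn-server-config MyVpnServerConfig \
  --scale-unit 2
```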
### What are Virtual WAN gateway scale units?
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
For information about gateway SKUs, see [VPN gateway SKUs](vpn-gateway-about-vpn
Zone-redundant gateways and zonal gateways both rely on the Azure public IP resource *Standard* SKU. The configuration of the Azure public IP resource determines whether the gateway that you deploy is zone-redundant, or zonal. If you create a public IP resource with a *Basic* SKU, the gateway will not have any zone redundancy, and the gateway resources will be regional.
+> [!IMPORTANT]
+> *Standard* public IP resources with Tier = Global can't be attached to a gateway. Only *Standard* public IP resources with Tier = Regional can be used.
+
### <a name="pipzrg"></a>Zone-redundant gateways

When you create a public IP address using the **Standard** public IP SKU without specifying a zone, the behavior differs depending on whether the gateway is a VPN gateway or an ExpressRoute gateway.
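A minimal sketch of creating such a public IP with the Azure CLI (names hypothetical); omitting `--zone` illustrates the no-zone case described above, and `--tier Regional` satisfies the note earlier in this section:

```azurecli
# Sketch: create a Standard-SKU, Regional-tier public IP for a gateway.
# Names are hypothetical; no --zone is specified, matching the case
# described in this section.
az network public-ip create \
  --resource-group MyRG \
  --name MyGatewayPip \
  --sku Standard \
  --tier Regional \
  --allocation-method Static
```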