Updates from: 08/30/2022 01:09:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
-In this sample tutorial, learn how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection) (DFP) with Azure Active Directory (AD) B2C.
+In this sample tutorial, learn how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection/ap-overview) (DFP) with Azure Active Directory (AD) B2C.
Microsoft DFP provides organizations with the capability to assess the risk of attempts to create fraudulent accounts and log-ins. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
The SCIM endpoint must have an HTTP address and server authentication certificat
The .NET Core SDK includes an HTTPS development certificate that can be used during development; the certificate is installed as part of the first-run experience. Depending on how you run the ASP.NET Core web application, it listens on a different port:
-* Microsoft.SCIM.WebHostSample: <https://localhost:5001>
-* IIS Express: <https://localhost:44359/>
+* Microsoft.SCIM.WebHostSample: `https://localhost:5001`
+* IIS Express: `https://localhost:44359`
For more information on HTTPS in ASP.NET Core, see [Enforce HTTPS in ASP.NET Core](/aspnet/core/security/enforcing-ssl)
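A quick way to verify the development certificate before running the sample is the `dotnet dev-certs` tool. This is a minimal sketch, not from the article; the project path assumes you launch the sample from the repository folder containing the `Microsoft.SCIM.WebHostSample` project:

```powershell
# Minimal sketch: check for and trust the ASP.NET Core HTTPS development certificate,
# then run the sample host (assumed project path), which listens on https://localhost:5001.
dotnet dev-certs https --check
dotnet dev-certs https --trust
dotnet run --project .\Microsoft.SCIM.WebHostSample
```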
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
Previously updated : 04/29/2021 Last updated : 08/29/2022
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/what-is-application-proxy.md
Previously updated : 04/27/2021 Last updated : 08/29/2022
In today's digital workplace, users work anywhere with multiple devices and apps
With Application Proxy, Azure AD keeps track of users who need to access web apps published on-premises and in the cloud. It provides a central management point for those apps. While not required, it's recommended you also enable Azure AD Conditional Access. By defining conditions for how users authenticate and gain access, you further ensure that the right people access your applications.
-**Note:** It's important to understand that Azure AD Application Proxy is intended as a VPN or reverse proxy replacement for roaming (or remote) users who need access to internal resources. It's not intended for internal users on the corporate network. Internal users who unnecessarily use Application Proxy can introduce unexpected and undesirable performance issues.
+> [!NOTE]
+> It's important to understand that Azure AD Application Proxy is intended as a VPN or reverse proxy replacement for roaming (or remote) users who need access to internal resources. It's not intended for internal users on the corporate network. Internal users who unnecessarily use Application Proxy can introduce unexpected and undesirable performance issues.
![Azure Active Directory and all your apps](media/what-is-application-proxy/azure-ad-and-all-your-apps.png)
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
c5dbd20a-8b8f-4791-a23f-488fcbde3b38 5/22/2022 11:19:17 PM False True
```
-For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod&preserve-view=true) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
+For more information, see [New-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/new-mguserauthenticationtemporaryaccesspassmethod) and [Get-MgUserAuthenticationTemporaryAccessPassMethod](/powershell/module/microsoft.graph.identity.signins/get-mguserauthenticationtemporaryaccesspassmethod?view=graph-powershell-beta&preserve-view=true).
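A hedged sketch of the two cmdlets referenced above, assuming the Microsoft Graph PowerShell SDK is installed and `tapuser@contoso.com` is a placeholder account:

```powershell
# Sketch: create a one-time-use Temporary Access Pass valid for 60 minutes, then list it.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

$properties = @{
    isUsableOnce      = $true   # the pass can be redeemed only once
    lifetimeInMinutes = 60      # the pass expires after one hour
}

New-MgUserAuthenticationTemporaryAccessPassMethod -UserId "tapuser@contoso.com" -BodyParameter $properties
Get-MgUserAuthenticationTemporaryAccessPassMethod -UserId "tapuser@contoso.com" | Format-List
```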
## Use a Temporary Access Pass
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Microsoft works with researchers, law enforcement, various security teams at Microsoft, and other trusted sources to find leaked username and password pairs. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection user risk detections](../identity-protection/concept-identity-protection-risks.md).
-There are two locations where this policy may be configured, Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to utilize other Conditional Access attributes in the policy.
+There are two locations where this policy may be configured: Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method, providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to use other Conditional Access attributes like sign-in frequency in the policy.
## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Access controls** > **Grant**.
   1. Select **Grant access**, **Require password change**.
   1. Select **Select**.
+1. Under **Session**.
+ 1. Select **Sign-in frequency**.
+ 1. Ensure **Every time** is selected.
+ 1. Select **Select**.
1. Confirm your settings, and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
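For organizations that script policy deployment, the same policy can be expressed against the Microsoft Graph conditional access API. The following is an illustrative sketch only, not the article's procedure: the display name and scoping are placeholders, Graph pairs the `passwordChange` control with `mfa` under an `AND` operator, and availability of `frequencyInterval` can vary by endpoint version:

```powershell
# Sketch: create the user risk policy in report-only mode via Microsoft Graph.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "User risk - require password change"   # placeholder name
    state       = "enabledForReportingButNotEnforced"     # report-only
    conditions  = @{
        userRiskLevels = @("high")
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("All") }
    }
    grantControls = @{
        operator        = "AND"                           # passwordChange must pair with mfa
        builtInControls = @("mfa", "passwordChange")
    }
    sessionControls = @{
        signInFrequency = @{
            isEnabled          = $true
            frequencyInterval  = "everyTime"              # the "Every time" option
            authenticationType = "primaryAndSecondaryAuthentication"
        }
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body $policy
```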
## Next steps
-[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
-
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
-
-[Sign-in risk-based Conditional Access](howto-conditional-access-policy-risk.md)
-
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
-
-[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
+- [Require reauthentication every time](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
+- [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
+- [Conditional Access common policies](concept-conditional-access-policy-common.md)
+- [Sign-in risk-based Conditional Access](howto-conditional-access-policy-risk.md)
+- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+- [Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+- [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Most users have a normal behavior that can be tracked, when they fall outside of
A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Organizations with Azure AD Premium P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../identity-protection/concept-identity-protection-risks.md#sign-in-risk).
-There are two locations where this policy may be configured, Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to utilize other Conditional Access attributes in the policy.
+There are two locations where this policy may be configured: Conditional Access and Identity Protection. Configuration using a Conditional Access policy is the preferred method, providing more context including enhanced diagnostic data, report-only mode integration, Graph API support, and the ability to use other Conditional Access attributes like sign-in frequency in the policy.
-The Sign-in risk-based policy protects users from registering MFA in risky sessions. If users aren't registered for MFA, their risky sign-ins will get blocked, and they see an AADSTS53004 error.
+The Sign-in risk-based policy protects users from registering MFA in risky sessions. If users aren't registered for MFA, their risky sign-ins are blocked, and they see an AADSTS53004 error.
## Template deployment
Organizations can choose to deploy this policy using the steps outlined below or
1. Under **Access controls** > **Grant**.
   1. Select **Grant access**, **Require multifactor authentication**.
   1. Select **Select**.
+1. Under **Session**.
+ 1. Select **Sign-in frequency**.
+ 1. Ensure **Every time** is selected.
+ 1. Select **Select**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
-After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
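Before turning enforcement on, it can help to see which accounts the policy would catch. A hedged sketch using the Identity Protection cmdlets in the Microsoft Graph PowerShell SDK; the filter values are examples:

```powershell
# Sketch: list high-risk users and a sample of recent high-risk detections.
Connect-MgGraph -Scopes "IdentityRiskyUser.Read.All", "IdentityRiskEvent.Read.All"

Get-MgRiskyUser -Filter "riskLevel eq 'high'" |
    Select-Object UserDisplayName, RiskLevel, RiskState, RiskLastUpdatedDateTime

Get-MgRiskDetection -Filter "riskLevel eq 'high'" -Top 10 |
    Select-Object RiskEventType, ActivityDateTime, UserPrincipalName
```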
## Next steps
-[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
-
-[Conditional Access common policies](concept-conditional-access-policy-common.md)
-
-[User risk-based Conditional Access](howto-conditional-access-policy-risk-user.md)
-
-[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
-
-[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
-
-[What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
+- [Require reauthentication every time](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
+- [Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
+- [Conditional Access common policies](concept-conditional-access-policy-common.md)
+- [User risk-based Conditional Access](howto-conditional-access-policy-risk-user.md)
+- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+- [Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
+- [What is Azure Active Directory Identity Protection?](../identity-protection/overview-identity-protection.md)
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Last updated 08/22/2022
-+
Sign-in frequency previously applied only to the first factor authentication
### User sign-in frequency and device identities
-If you have Azure AD joined, hybrid Azure AD joined, or Azure AD registered devices, when a user unlocks their device or signs in interactively, this event will satisfy the sign-in frequency policy as well. In the following two examples user sign-in frequency is set to 1 hour:
+On Azure AD joined, hybrid Azure AD joined, or Azure AD registered devices, unlocking the device or signing in interactively will satisfy the sign-in frequency policy. In the following two examples user sign-in frequency is set to 1 hour:
Example 1:
Example 2:
- At 00:45, the user returns from their break and unlocks the device.
- At 01:45, the user is prompted to sign in again based on the sign-in frequency requirement in the Conditional Access policy configured by their administrator since the last sign-in happened at 00:45.
-### Require reauthentication every time (preview)
+### Require reauthentication every time
There are scenarios where customers may want to require a fresh authentication, every time before a user performs specific actions. Sign-in frequency has a new option for **Every time** in addition to hours or days.
-The public preview supports the following scenarios:
+Supported scenarios:
- Require user reauthentication during [Intune device enrollment](/mem/intune/fundamentals/deployment-guide-enrollment), regardless of their current MFA status.
- Require user reauthentication for risky users with the [require password change](concept-conditional-access-grant.md#require-password-change) grant control.
The public preview supports the following scenarios:
When administrators select **Every time**, it will require full reauthentication when the session is evaluated.
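In Graph terms, the portal's two choices map to two shapes of the `signInFrequency` session control. A hedged sketch of both shapes follows; the property names follow the `signInFrequencySessionControl` resource, and this isn't a complete policy:

```powershell
# Sketch: periodic reauthentication after a fixed interval (for example, one hour).
$periodic = @{
    isEnabled = $true
    type      = "hours"
    value     = 1
}

# Sketch: "Every time" requires full reauthentication whenever the session is evaluated.
$everyTime = @{
    isEnabled          = $true
    frequencyInterval  = "everyTime"
    authenticationType = "primaryAndSecondaryAuthentication"
}
```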
-> [!NOTE]
-> An early preview version included the option to prompt for Secondary authentication methods only at reauthentication. This option is no longer supported and should not be used.
-
-> [!WARNING]
-> Using require reauthentication every time with the sign-in risk grant control set to **No risk** isn't supported and will result in poor user experience.
-
## Persistence of browsing sessions

A persistent browser session allows users to remain signed in after closing and reopening their browser window.
-The Azure AD default for browser session persistence allows users on personal devices to choose whether to persist the session by showing a "Stay signed in?" prompt after successful authentication. If browser persistence is configured in AD FS using the guidance in the article [AD FS Single Sign-On Settings](/windows-server/identity/ad-fs/operations/ad-fs-single-sign-on-settings#enable-psso-for-office-365-users-to-access-sharepoint-online), we'll comply with that policy and persist the Azure AD session as well. You can also configure whether users in your tenant see the "Stay signed in?" prompt by changing the appropriate setting in the company branding pane in Azure portal using the guidance in the article [Customize your Azure AD sign-in page](../fundamentals/customize-branding.md).
+The Azure AD default for browser session persistence allows users on personal devices to choose whether to persist the session by showing a "Stay signed in?" prompt after successful authentication. If browser persistence is configured in AD FS using the guidance in the article [AD FS single sign-on settings](/windows-server/identity/ad-fs/operations/ad-fs-single-sign-on-settings#enable-psso-for-office-365-users-to-access-sharepoint-online), we'll comply with that policy and persist the Azure AD session as well. You can also configure whether users in your tenant see the "Stay signed in?" prompt by changing the appropriate setting in the [company branding pane](../fundamentals/customize-branding.md).
## Configuring authentication session controls
To make sure that your policy works as expected, the recommended best practice i
1. Under **Access controls** > **Session**.
1. Select **Sign-in frequency**.
- 1. Enter the required value of days or hours in the first text box.
- 1. Select a value of **Hours** or **Days** from dropdown.
+ 1. Choose **Periodic reauthentication** and enter a value of hours or days or select **Every time**.
1. Save your policy.
-![Conditional Access policy configured for sign-in frequency](media/howto-conditional-access-session-lifetime/conditional-access-policy-session-sign-in-frequency.png)
-
-On Azure AD registered Windows devices, sign in to the device is considered a prompt. For example, if you've configured the sign-in frequency to 24 hours for Office apps, users on Azure AD registered Windows devices will satisfy the sign-in frequency policy by signing in to the device and will be not prompted again when opening Office apps.
+ > ![Conditional Access policy configured for sign-in frequency](media/howto-conditional-access-session-lifetime/conditional-access-policy-session-sign-in-frequency.png)
### Policy 2: Persistent browser session
On Azure AD registered Windows devices, sign in to the device is considered a pr
1. Under **Access controls** > **Session**.
1. Select **Persistent browser session**.
- 1. Select a value from dropdown.
-1. Save your policy.
-![Conditional Access policy configured for persistent browser](media/howto-conditional-access-session-lifetime/conditional-access-policy-session-persistent-browser.png)
+ > [!NOTE]
+ > Persistent Browser Session configuration in Azure AD Conditional Access overrides the "Stay signed in?" setting in the company branding pane in the Azure portal for the same user if you have configured both policies.
-> [!NOTE]
-> Persistent Browser Session configuration in Azure AD Conditional Access will overwrite the "Stay signed in?" setting in the company branding pane in the Azure portal for the same user if you have configured both policies.
+ 1. Select a value from dropdown.
+1. Save your policy.
### Policy 3: Sign-in frequency control every time risky user
On Azure AD registered Windows devices, sign in to the device is considered a pr
1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
1. Under **Conditions** > **User risk**, set **Configure** to **Yes**. Under **Configure user risk levels needed for policy to be enforced** select **High**, then select **Done**.
1. Under **Access controls** > **Grant**, select **Grant access**, **Require password change**, and select **Select**.
-1. Under **Session controls** > **Sign-in frequency**, select **Every time (preview)**.
+1. Under **Session controls** > **Sign-in frequency**, select **Every time**.
1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create and enable your policy.
After administrators confirm your settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
### Validation
-Use the What-If tool to simulate a sign-in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
-
-![Conditional Access What If tool results](media/howto-conditional-access-session-lifetime/conditional-access-what-if-tool-result.png)
+Use the [What If tool](what-if-tool.md) to simulate a sign-in from the user to the target application and other conditions based on how you configured your policy. The authentication session management controls show up in the result of the tool.
## Prompt tolerance
-We factor for five minutes of clock skew, so that we don't prompt users more often than once every five minutes. If the user has done MFA in the last 5 minutes, and they hit another Conditional Access policy that requires reauthentication, we won't prompt the user. Over-prompting users for reauthentication can impact their productivity and increase the risk of users approving MFA requests they didn't initiate. We highly recommend using "Sign-in frequency - every time" only for specific business needs.
+We factor for five minutes of clock skew, so that we don't prompt users more often than once every five minutes. If the user has done MFA in the last 5 minutes, and they hit another Conditional Access policy that requires reauthentication, we won't prompt the user. Over-prompting users for reauthentication can impact their productivity and increase the risk of users approving MFA requests they didn't initiate. Use "Sign-in frequency - every time" only for specific business needs.
## Known issues
-- If you configure sign-in frequency for mobile devices, authentication after each sign-in frequency interval could be slow (it can take 30 seconds on average). Also, it could happen across various apps at the same time.
-- In iOS devices, if an app configures certificates as the first authentication factor and the app has both Sign-in frequency and [Intune mobile application management](/mem/intune/apps/app-lifecycle) policies applied, the end-users will be blocked from signing in to the app when the policy is triggered.
+
+- If you configure sign-in frequency for mobile devices: Authentication after each sign-in frequency interval could be slow; it can take 30 seconds on average. Also, it could happen across various apps at the same time.
+- On iOS devices: If an app configures certificates as the first authentication factor and the app has both Sign-in frequency and [Intune mobile application management policies](/mem/intune/apps/app-lifecycle) applied, end-users are blocked from signing in to the app when the policy triggers.
## Next steps
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Previously updated : 07/16/2021 Last updated : 08/26/2022
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password Credentials (ROPC) grant](https://tools.ietf.org/html/rfc6749#section-4.3), which allows an application to sign in the user by directly handling their password. This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).

> [!WARNING]
-> Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks which are not present in other flows. You should only use this flow when other more secure flows can't be used.
+> Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows aren't viable.
> [!IMPORTANT]
>
-> * The Microsoft identity platform only supports ROPC within Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint.
-> * Personal accounts that are invited to an Azure AD tenant can't use ROPC.
-> * Accounts that don't have passwords can't sign in with ROPC, which means features like SMS sign-in, FIDO, and the Authenticator app won't work with that flow. Use a flow other than ROPC if your app or users require these features.
+> * The Microsoft identity platform only supports the ROPC grant within Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint.
+> * Personal accounts that are invited to an Azure AD tenant can't use the ROPC flow.
+> * Accounts that don't have passwords can't sign in with ROPC, which means features like SMS sign-in, FIDO, and the Authenticator app won't work with that flow. If your app or users require these features, use a grant type other than ROPC.
> * If users need to use [multi-factor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to log in to the application, they will be blocked instead.
-> * ROPC is not supported in [hybrid identity federation](../hybrid/whatis-fed.md) scenarios (for example, Azure AD and ADFS used to authenticate on-premises accounts). If users are full-page redirected to an on-premises identity providers, Azure AD is not able to test the username and password against that identity provider. [Pass-through authentication](../hybrid/how-to-connect-pta.md) is supported with ROPC, however.
-> * An exception to a hybrid identity federation scenario would be the following: Home Realm Discovery policy with AllowCloudPasswordValidation set to TRUE will enable ROPC flow to work for federated users when on-premises password is synced to cloud. For more information, see [Enable direct ROPC authentication of federated users for legacy applications](../manage-apps/home-realm-discovery-policy.md#enable-direct-ropc-authentication-of-federated-users-for-legacy-applications).
+> * ROPC is not supported in [hybrid identity federation](../hybrid/whatis-fed.md) scenarios (for example, Azure AD and ADFS used to authenticate on-premises accounts). If users are full-page redirected to an on-premises identity provider, Azure AD is not able to test the username and password against that identity provider. [Pass-through authentication](../hybrid/how-to-connect-pta.md) is supported with ROPC, however.
+> * An exception to a hybrid identity federation scenario would be the following: Home Realm Discovery policy with **AllowCloudPasswordValidation** set to TRUE will enable ROPC flow to work for federated users when an on-premises password is synced to the cloud. For more information, see [Enable direct ROPC authentication of federated users for legacy applications](../manage-apps/home-realm-discovery-policy.md#enable-direct-ropc-authentication-of-federated-users-for-legacy-applications).
> * Passwords with leading or trailing whitespaces are not supported by the ROPC flow.

[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]
The following diagram shows the ROPC flow.
## Authorization request
-The ROPC flow is a single request: it sends the client identification and user's credentials to the IDP, and then receives tokens in return. The client must request the user's email address (UPN) and password before doing so. Immediately after a successful request, the client should securely release the user's credentials from memory. It must never save them.
+The ROPC flow is a single request; it sends the client identification and user's credentials to the identity provider, and receives tokens in return. The client must request the user's email address (UPN) and password before doing so. Immediately after a successful request, the client should securely discard the user's credentials from memory. It must never save them.
```HTTP
// Line breaks and spaces are for legibility only. This is a public client, so no secret is required.
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Condition | Description |
| | | |
-| `tenant` | Required | The directory tenant that you want to log the user into. This can be in GUID or friendly name format. This parameter can't be set to `common` or `consumers`, but may be set to `organizations`. |
+| `tenant` | Required | The directory tenant that you want to log the user into. The tenant can be in GUID or friendly name format. This parameter can't be set to `common` or `consumers`, but may be set to `organizations`. |
| `client_id` | Required | The Application (client) ID that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
| `grant_type` | Required | Must be set to `password`. |
| `username` | Required | The user's email address. |
| `password` | Required | The user's password. |
| `scope` | Recommended | A space-separated list of [scopes](v2-permissions-and-consent.md), or permissions, that the app requires. In an interactive flow, the admin or the user must consent to these scopes ahead of time. |
-| `client_secret`| Sometimes required | If your app is a public client, then the `client_secret` or `client_assertion` cannot be included. If the app is a confidential client, then it must be included.|
-| `client_assertion` | Sometimes required | A different form of `client_secret`, generated using a certificate. See [certificate credentials](active-directory-certificate-credentials.md) for more details. |
+| `client_secret`| Sometimes required | If your app is a public client, then the `client_secret` or `client_assertion` can't be included. If the app is a confidential client, then it must be included.|
+| `client_assertion` | Sometimes required | A different form of `client_secret`, generated using a certificate. For more information, see [certificate credentials](active-directory-certificate-credentials.md). |
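To make the request shape concrete, here's a hedged end-to-end sketch that posts these parameters to the tenant token endpoint. The client ID is the sample value used above; the tenant, username, and password are placeholders, and the same security caveats apply:

```powershell
# Sketch only: ROPC token request for a public client. Never hard-code real credentials.
$body = @{
    client_id  = "6731de76-14a6-49ae-97bc-6eba6914391e"   # sample app registration ID
    grant_type = "password"
    username   = "user@contoso.onmicrosoft.com"           # placeholder UPN
    password   = "placeholder-password"                   # placeholder secret
    scope      = "user.read openid profile offline_access"
}

# Invoke-RestMethod form-encodes the hashtable body for the POST.
$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token" `
    -Body $body

$response.access_token    # bearer token for the requested scopes
```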
> [!WARNING]
-> As part of not recomending this flow for use, the official SDKs do not support this flow for confidential clients, those that use a secret or assertion. You may find that the SDK you wish to use does not allow you to add a secret while using ROPC.
+> Because this flow isn't recommended, the official SDKs don't support it for confidential clients (those that use a secret or assertion). You may find that the SDK you wish to use doesn't allow you to add a secret while using ROPC.
### Successful authentication response
If the user hasn't provided the correct username or password, or the client hasn
| Error | Description | Client action |
| | -- | -|
-| `invalid_grant` | The authentication failed | The credentials were incorrect or the client doesn't have consent for the requested scopes. If the scopes aren't granted, a `consent_required` error will be returned. If this occurs, the client should send the user to an interactive prompt using a webview or browser. |
+| `invalid_grant` | The authentication failed | The credentials were incorrect or the client doesn't have consent for the requested scopes. If the scopes aren't granted, a `consent_required` error will be returned. To resolve this error, the client should send the user to an interactive prompt using a webview or browser. |
| `invalid_request` | The request was improperly constructed | The grant type isn't supported on the `/common` or `/consumers` authentication contexts. Use `/organizations` or a tenant ID instead. |

## Learn more
-For an example of using ROPC, see the [.NET Core console application](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) code sample on GitHub.
+For an example implementation of the ROPC flow, see the [.NET Core console application](https://github.com/azure-samples/active-directory-dotnetcore-console-up-v2) code sample on GitHub.
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Previously updated : 07/27/2022 Last updated : 08/29/2022
The Microsoft identity platform stores only the first 25 signing keys when they'
## Next steps

Learn more about how workload identity federation works:

- How Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.
-- How to create, delete, get, or update [federated identity credentials](/graph/api/resources/federatedidentitycredentials-overview?view=graph-rest-beta&preserve-view=true) on an app registration using Microsoft Graph.
+- How to create, delete, get, or update [federated identity credentials](/graph/api/resources/federatedidentitycredentials-overview) on an app registration using Microsoft Graph.
- Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
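As a companion to the federated identity credentials link above, a hedged sketch of creating one with the Microsoft Graph PowerShell SDK; the application object ID, credential name, and GitHub repository are placeholders:

```powershell
# Sketch: trust tokens issued to a GitHub Actions workflow for a given repo and branch.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$credential = @{
    name      = "github-actions-deploy"                       # placeholder credential name
    issuer    = "https://token.actions.githubusercontent.com" # GitHub's OIDC issuer
    subject   = "repo:contoso/app:ref:refs/heads/main"        # placeholder repo and branch
    audiences = @("api://AzureADTokenExchange")
}

# -ApplicationId is the app registration's *object* ID (placeholder here).
New-MgApplicationFederatedIdentityCredential `
    -ApplicationId "00000000-0000-0000-0000-000000000000" `
    -BodyParameter $credential
```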
active-directory Clean Up Stale Guest Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-stale-guest-accounts.md
+
+ Title: Clean up stale guest accounts - Azure Active Directory | Microsoft Docs
+description: Clean up stale guest accounts using access reviews
+ Last updated : 08/29/2022
+# Clean up stale guest accounts using access reviews
+
+As users collaborate with external partners, it's possible that many guest accounts get created in Azure Active Directory (Azure AD) tenants over time. When collaboration ends and the users no longer access your tenant, the guest accounts may become stale. Admins can use Access Reviews to automatically review inactive guest users and block them from signing in, and later, delete them from the directory.
+
+Learn more about [how to manage inactive user accounts in Azure AD](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts).
+
+There are a few recommended patterns that are effective at cleaning up stale guest accounts:
+
+1. Create a multi-stage review whereby guests self-attest whether they still need access. A second-stage reviewer assesses results and makes a final decision. Guests with denied access are disabled and later deleted.
+
+2. Create a review to remove inactive external guests. Admins define inactivity as a period of days. They disable and later delete guests that don't sign in to the tenant within that time frame. By default, this doesn't affect recently created users. [Learn more about how to identify inactive accounts](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-manage-inactive-user-accounts#how-to-detect-inactive-user-accounts). A query sketch for finding such guests follows this list.
+
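A hedged sketch for spotting inactive guests ahead of a review, using the `signInActivity` property; reading it requires the AuditLog.Read.All permission and a premium license, and the 90-day window is an example:

```powershell
# Sketch: list guest accounts whose last sign-in is older than 90 days.
Connect-MgGraph -Scopes "User.Read.All", "AuditLog.Read.All"

$cutoff = (Get-Date).AddDays(-90)

Get-MgUser -All -Filter "userType eq 'Guest'" -Property DisplayName, Mail, SignInActivity |
    Where-Object { $_.SignInActivity.LastSignInDateTime -and
                   $_.SignInActivity.LastSignInDateTime -lt $cutoff } |
    Select-Object DisplayName, Mail
```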
+Use the following instructions to learn how to create Access Reviews that follow these patterns. Consider the configuration recommendations and then make the needed changes that suit your environment.
+
+## Create a multi-stage review for guests to self-attest continued access
+
+1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review (a PowerShell sketch for creating such a group follows these steps). For example,
+
+ `(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
+
+2. To [create an Access Review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review)
+ for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+
+3. Select **New access review**.
+
+4. Configure Review type:
+
+ | Property | Value |
+ |:--|:-|
+ | Select what to review | **Teams + Groups**|
+ |Review scope | **Select Teams + groups** |
+ |Group| Select the dynamic group |
+ |Scope| **Guest users only**|
+ |(Optional) Review inactive guests | Check the box for **Inactive users (on tenant level) only**.<br> Enter the number of days that constitute inactivity.|
+
+ ![Screenshot shows the review type dialog for multi-stage review for guests to self-attest continued access.](./media/clean-up-stale-guest-accounts/review-type-multi-stage-review.png)
+
+5. Select **Next: Reviews**.
+
+6. Configure Reviews:
+
+ |Property | Value |
+ |:|:-|
+ | **First stage review** | |
+ | (Preview) Multi-stage review| Check the box|
+ |Select reviewers | **Users review their own access**|
+ | Stage duration (in days) | Enter the number of days |
+ |**Second stage review** | |
+ | Select reviewers | **Group owner(s)** or **Selected user(s) or group(s)**|
+ |Stage duration (in days) | Enter the number of days.<br>(Optional) Specify a fallback reviewer.|
+ | **Specify recurrence of review** | |
+ | Review recurrence | Select your preference from the drop-down|
+ |Start date| Select a date|
+ |End| Select your preference |
+ | **Specify reviewees to go to the next stage** | |
+ | Reviewees going to the next stage | Select reviewees. For example, select users who self-approved or responded **Don't know**. |
+
+ ![Screenshot shows the first stage review for multi-stage review for guests to self-attest continued access.](./media/clean-up-stale-guest-accounts/first-stage-review-for-multi-stage-review.png)
+
+7. Select **Next: Settings**.
+
+8. Configure Settings:
+
+ | Property | Value |
+ |:--|:-|
+ | **Upon completion settings**| |
+ |Auto apply results to resource| Check the box|
+ |If reviewers don't respond | **Remove access** |
+ | Action to apply on denied guest users | **Block user from signing in for 30 days, then remove user from the tenant**|
+ | (Optional) At end of review, send notification to | Specify other users or groups to notify.|
+ | **Enable reviewer decision helpers** | |
+ | Additional content for reviewer email | Add a custom message for reviewers |
+ | All other fields| Leave the default values for the remaining options. |
+
+ ![Screenshot shows the settings dialog for multi-stage review for guests to self-attest continued access.](./media/clean-up-stale-guest-accounts/settings-for-multi-stage-review.png)
+
+9. Select **Next: Review + Create**.
+
+10. Enter an Access Review name. Optionally, provide a description.
+
+11. Select **Create**.
+
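As referenced in step 1, here's a hedged sketch of creating the dynamic guest group with the Microsoft Graph PowerShell SDK; the display name and mail nickname are placeholders, and the membership rule is the example from step 1:

```powershell
# Sketch: create the dynamic security group that scopes the access review.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Contoso guest accounts" `
    -MailEnabled:$false `
    -MailNickname "contosoGuests" `
    -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)' `
    -MembershipRuleProcessingState "On"
```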
+## Create a review to remove inactive external guests
+
+1. Create a [dynamic group](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-create-rule) for the guest users you want to review. For example,
+
+ `(user.userType -eq "Guest") and (user.mail -contains "@contoso.com") and (user.accountEnabled -eq true)`
+
+2. To [create an access review](https://docs.microsoft.com/azure/active-directory/governance/create-access-review) for the dynamic group, navigate to **Azure Active Directory > Identity Governance > Access Reviews**.
+
+3. Select **New access review**.
+
+4. Configure Review type:
+
+ |Property | Value |
+ |:|:|
+ | Select what to review | **Teams + Groups** |
+ | Review scope | **Select Teams + groups** |
+ | Group | Select the dynamic group |
| Scope | **Guest users only** |
+ | Inactive users (on tenant level) only | Check the box |
+ | Days inactive | Enter the number of days that constitutes inactivity |
+
+ >[!NOTE]
+   >The inactivity time you configure will not affect recently created users. The Access Review will check if the user has been created in the timeframe you configure and ignore users who haven't existed for at least that amount of time. For example, if you set the inactivity time as 90 days and a guest user was created/invited less than 90 days ago, the guest user will not be in scope of the Access Review. This ensures that guests can sign in once before being removed.
+
+ ![Screenshot shows the review type dialog to remove inactive external guests.](./media/clean-up-stale-guest-accounts/review-type-remove-inactive-guests.png)
+
+5. Select **Next: Reviews**.
+
+6. Configure Reviews:
+
+ | Property | Value |
+ |:-|:|
+ | **Specify reviewers** | |
+ | Select reviewers | Select **Group owner(s)** or a user or group.<br>(Optional) To enable the process to remain automated, select a reviewer who will take no action.|
+ | **Specify recurrence of review**| |
+ | Duration (in days) | Enter or select a value based on your preference|
+ | Review recurrence | Select your preference from the drop-down |
+ | Start date | Select a date |
+ | End | Choose an option |
+
+7. Select **Next: Settings**.
+
+ ![Screenshot shows the Reviews dialog to remove inactive external guests.](./media/clean-up-stale-guest-accounts/reviews-remove-inactive-guests.png)
+
+8. Configure Settings:
+
+ | Property | Value |
+ | :-| :--|
+ | **Upon completion settings** | |
+ | Auto apply results to resource | Check the box |
+ | If reviewers don't respond | **Remove access** |
+ | Action to apply on denied guest users | **Block user from signing in for 30 days, then remove user from the tenant** |
+ | **Enable reviewer decision helpers** | |
+ | No sign-in within 30 days | Check the box |
+ | All other fields | Check/uncheck the boxes based on your preference. |
+
+ ![Screenshot shows the Settings dialog to remove inactive external guests.](./media/clean-up-stale-guest-accounts/settings-remove-inactive-guests.png)
+
+9. Select **Next: Review + Create**.
+
+10. Enter an Access Review name. Optionally, provide a description.
+
+11. Select **Create**.
+
+Guest users who don't sign in to the tenant for the number of days you configured are disabled for 30 days, then deleted. After deletion, you can restore guests for up to 30 days, after which a new invitation is needed.
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
description: Planning for a successful access reviews campaign for a particular
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Package Auto Assignment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md
description: Learn how to configure automatic assignments based on rules for an
documentationCenter: '' -+ editor:
This article describes how to create an access package automatic assignment poli
## Before you begin
-You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user?view=graph-rest-beta), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
+You'll need to have attributes populated on the users who will be in scope for being assigned access. The attributes you can use in the rules criteria of an access package assignment policy are those attributes listed in [supported properties](../enterprise-users/groups-dynamic-membership.md#supported-properties), along with [extension attributes and custom extension properties](../enterprise-users/groups-dynamic-membership.md#extension-properties-and-custom-extension-properties). These attributes can be brought into Azure AD from [Graph](/graph/api/resources/user), an HR system such as [SuccessFactors](../app-provisioning/sap-successfactors-integration-reference.md), [Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md) or [Azure AD Connect sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md).
## Create an automatic assignment policy (Preview)
To create a policy for an access package, you need to start from the access pack
1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by clicking **Edit** on the rule syntax text box. > [!NOTE]
- > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires the you to be in the Global administrator role. For more information, see [rule builder in the Azure portal](/enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
+   > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires you to be in the Global administrator role. For more information, see [rule builder in the Azure portal](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal).
![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png)
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
description: Learn how to complete an access review of entitlement management ac
documentationCenter: '' -+ editor:
active-directory Entitlement Management Access Reviews Self Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-self-review.md
description: Learn how to review user access of entitlement management access pa
documentationCenter: '' -+ editor:
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
There are two places where you can see the expiration date in the Azure portal.
## Next steps

-- [Create an Automation account using the Azure portal](../../automation/quickstarts/create-account-portal.md)
+- [Create an Automation account using the Azure portal](/azure/automation/quickstarts/create-azure-automation-account-portal)
active-directory What Is Identity Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-identity-lifecycle-management.md
Title: 'What is identity lifecycle management with Azure Active Directory? | Mic
description: Describes overview of identity lifecycle management. -+
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-provisioning.md
Title: 'What is provisioning with Azure Active Directory? | Microsoft Docs'
description: Describes overview of identity provisioning and the ILM scenarios. -+
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
We detect risk on workload identities across sign-in behavior and offline indica
| Unusual addition of credentials to an OAuth app | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-addition-of-credentials-to-an-oauth-app). This detection identifies the suspicious addition of privileged credentials to an OAuth app. This can indicate that an attacker has compromised the app, and is using it for malicious activity. |
| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials (public preview) | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
-| Anomalous service principal activity (public preview) | Offline | This risk detection indicates suspicious patterns of activity have been identified for an authenticated service principal. The post-authentication behavior for service principals is assessed for anomalies based on action or sequence of actions occurring for the account, along with any sign-in risk detected. |
## Identify risky workload identities
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
-# How To: Configure and enable risk policies
+# Configure and enable risk policies
As we learned in the previous article, [Identity Protection policies](concept-identity-protection-policies.md), there are two risk policies that we can enable in our directory.
There are two locations where these policies may be configured, Conditional Acce
    - Enhanced diagnostic data
    - Report-only mode integration
    - Graph API support
- - Use more Conditional Access attributes in policy
+ - Use more Conditional Access attributes like sign-in frequency in the policy
Organizations can choose to deploy policies using the steps outlined below or using the [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates-preview).
Before organizations enable remediation policies, they may want to [investigate]
1. Under **Access controls** > **Grant**.
   1. Select **Grant access**, **Require password change**.
   1. Select **Select**.
+1. Under **Session**.
+ 1. Select **Sign-in frequency**.
+ 1. Ensure **Every time** is selected.
+ 1. Select **Select**.
1. Confirm your settings, and set **Enable policy** to **On**.
1. Select **Create** to create and enable your policy.
Before organizations enable remediation policies, they may want to [investigate]
1. Under **Access controls** > **Grant**.
   1. Select **Grant access**, **Require multi-factor authentication**.
   1. Select **Select**.
+1. Under **Session**.
+ 1. Select **Sign-in frequency**.
+ 1. Ensure **Every time** is selected.
+ 1. Select **Select**.
1. Confirm your settings and set **Enable policy** to **On**.
1. Select **Create** to create and enable your policy.
Before organizations enable remediation policies, they may want to [investigate]
- [What is risk](concept-identity-protection-risks.md)
- [Investigate risk detections](howto-identity-protection-investigate-risk.md)
- [Simulate risk detections](howto-identity-protection-simulate-risk.md)
+- [Require reauthentication every time](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Role-assignable groups are designed to help prevent potential breaches by having
- The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role.
- By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
- RoleManagement.ReadWrite.Directory Microsoft Graph permission is required to be able to manage the membership of such groups; Group.ReadWrite.All won't work.
-- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group.
+- To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials, reset MFA, or modify sensitive attributes for members and owners of a role-assignable group.
- Group nesting is not supported. A group can't be added as a member of a role-assignable group.

## Use PIM to make a group eligible for a role assignment
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/my-staff-configure.md
Once you have configured administrative units, you can apply this scope to your
1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) as a Global Administrator, User Administrator, or Group Administrator.
-1. Select **Azure Active Directory** > **User settings** > **User feature ** > **Manage user feature settings**.
+1. Select **Azure Active Directory** > **User settings** > **User feature** > **Manage user feature settings**.
1. Under **Administrators can access My Staff**, you can choose to enable for all users, selected users, or no user access.
active-directory Tableauonline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauonline-tutorial.md
Previously updated : 06/14/2022 Last updated : 07/29/2022 # Tutorial: Azure AD SSO integration with Tableau Cloud
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Setup configuration](common/setup-sso.png)
-3. If you want to setup Tableau Cloud manually, in a different web browser window, sign in to your Tableau Cloud company site as an administrator.
+3. If you want to set up Tableau Cloud manually, in a different web browser window, sign in to your Tableau Cloud company site as an administrator.
1. Go to **Settings** and then **Authentication**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
a. In the Azure portal, go to the **Tableau Cloud** application integration page.
- b. In the **User Attributes & Claims** section, click on the edit icon.
+ b. In the **User Attributes & Claims** section, click the edit icon, and then perform the following steps to add a SAML token attribute as shown in the following table:
![Screenshot shows the User Attributes & Claims section where you can select the edit icon.](./media/tableauonline-tutorial/attribute-section.png)
+ | Name | Source Attribute|
+ | | |
+ | DisplayName | user.displayname |
++ c. Copy the namespace value for these attributes: givenname, email and surname by using the following steps: ![Screenshot shows the Givenname, Surname, and Emailaddress attributes.](./media/tableauonline-tutorial/name.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
* Email: **mail** or **userprincipalname**
- * First name: **givenname**
-
- * Last name: **surname**
+ * Full name: **displayname**
![Screenshot shows the Match attributes section where you can enter the values.](./media/tableauonline-tutorial/claims.png)
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
The following is a list of FedRAMP resources:
* [Federal Risk and Authorization Management Program](https://www.fedramp.gov/)
-* [FedRAMP Security Assessment Framework](https://www.fedramp.gov/assets/resources/documents/FedRAMP_Security_Assessment_Framework.pdf)
+* [FedRAMP Security Assessment Framework](https://reciprocity.com/blog/conducting-a-fedramp-risk-assessment/)
* [Agency Guide for FedRAMP Authorizations](https://www.fedramp.gov/assets/resources/documents/Agency_Authorization_Playbook.pdf)
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
The following example demonstrates a callback payload after the verifiable crede
"requestStatus": "presentation_verified", "state": "92d076dd-450a-4247-aa5b-d2e75a1a5d58", "subject": "did:ion:EiAlrenrtD3Lsw0GlbzS1O2YFdy3Xtu8yo35W<SNIP>…",
- "issuers": [
+ "verifiedCredentialsData": [
{
+ "issuer": "did:ion:issuer",
"type": [ "VerifiableCredential", "VerifiedCredentialExpert"
The following example demonstrates a callback payload after the verifiable crede
"firstName": "Megan", "lastName": "Bowen" },
- "domain": "https://contoso.com/",
- "verified": "DNS",
- "authority": "did:ion:….."
+ "credentialState": {
+ "revocationStatus": "VALID"
+ },
+ "domainValidation": {
+ "url": "https://contoso.com/"
+ }
} ], "receipt": {
- "id_token": "eyJraWQiOiJkaWQ6aW<SNIP>"
+ "id_token": "eyJraWQiOiJkaWQ6aW<SNIP>",
+ "vp_token": "...",
+ "state": "..."
}
-}ΓÇ»
+}
``` ## Next steps
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
In this step, you create the verified credential expert card by using Microsoft
], "required": false }
- ],
- "validityInterval": 2592000,
- "vc": {
- "type": [
- "VerifiedCredentialExpert"
- ]
- }
+ ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
} } ```
public async Task<ActionResult> issuanceRequest()
... // Here you could change the payload manifest and change the first name and last name.
- payload["issuance"]["claims"]["given_name"] = "Megan";
- payload["issuance"]["claims"]["family_name"] = "Bowen";
+ payload["claims"]["given_name"] = "Megan";
+ payload["claims"]["family_name"] = "Bowen";
... } ```
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
kubectl apply -f role-dev-namespace.yaml
Next, get the resource ID for the *appdev* group using the [az ad group show][az-ad-group-show] command. This group is set as the subject of a RoleBinding in the next step. ```azurecli-interactive
-az ad group show --group appdev --query objectId -o tsv
+az ad group show --group appdev --query id -o tsv
``` Now, create a RoleBinding for the *appdev* group to use the previously created Role for namespace access. Create a file named `rolebinding-dev-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command:
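The referenced manifest isn't shown in this excerpt. A minimal sketch of such a RoleBinding follows, assuming the Role created earlier is named `dev-user-full-access` in a `dev` namespace (both names are illustrative); the *opssre* binding later in this excerpt follows the same shape:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-user-access
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-user-full-access
subjects:
# Replace groupObjectId with the ID returned by az ad group show.
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: groupObjectId
```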
kubectl apply -f role-sre-namespace.yaml
Get the resource ID for the *opssre* group using the [az ad group show][az-ad-group-show] command: ```azurecli-interactive
-az ad group show --group opssre --query objectId -o tsv
+az ad group show --group opssre --query id -o tsv
``` Create a RoleBinding for the *opssre* group to use the previously created Role for namespace access. Create a file named `rolebinding-sre-namespace.yaml` and paste the following YAML manifest. On the last line, replace *groupObjectId* with the group object ID output from the previous command:
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
The `validate-jwt` policy enforces existence and validity of a JSON web token (J
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
-> The `validate-jwt` policy supports HS256 and RS256 signing algorithms. For HS256 the key must be provided inline within the policy in the base64 encoded form. For RS256 the key may be provided either via an Open ID configuration endpoint, or by providing the ID of an uploaded certificate that contains the public key or modulus-exponent pair of the public key.
+> The `validate-jwt` policy supports HS256 and RS256 signing algorithms. For HS256 the key must be provided inline within the policy in the base64 encoded form. For RS256 the key may be provided either via an Open ID configuration endpoint, or by providing the ID of an uploaded certificate (in PFX format) that contains the public key or modulus-exponent pair of the public key.
> The `validate-jwt` policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512. [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
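For illustration, a minimal `validate-jwt` sketch for the RS256 case that resolves signing keys from an OpenID configuration endpoint; the URL and audience below are placeholders, not values from the article:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" require-expiration-time="true" require-signed-tokens="true">
    <!-- Signing keys are discovered from the identity provider's OpenID configuration document. -->
    <openid-config url="https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>api://my-backend-api</audience>
    </audiences>
</validate-jwt>
```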
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
This article shows how to automate backup and restore operations of your API Man
* An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md). * An Azure storage account. If you don't have one, see [Create a storage account](../storage/common/storage-account-create.md).
- * [Create a container](/storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account to hold the backup data.
+ * [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in the storage account to hold the backup data.
* The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps).
Check out the following related resources for the backup/restore process:
[api-management-arm-token]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-arm-token.png [api-management-endpoint]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-endpoint.png [control-plane-ip-address]: virtual-network-reference.md#control-plane-ip-addresses
-[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
+[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
To complete this quickstart, you need an Azure account with an active subscripti
> [!IMPORTANT] > - [After November 28, 2022, PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
-> - The MySQL Flexible Server is created behind a private [Virtual Network](/virtual-network/virtual-networks-overview) and can't be accessed directly. To access the database, use phpMyAdmin that's deployed with the WordPress site. It can be found at the URL : https://`<sitename>`.azurewebsites.net/phpmyadmin
+> - The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview) and can't be accessed directly. To access the database, use phpMyAdmin, which is deployed with the WordPress site. It can be found at the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin
> > If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
Congratulations, you've successfully completed this quickstart!
> [Tutorial: PHP app with MySQL](tutorial-php-mysql-app.md) > [!div class="nextstepaction"]
-> [Configure PHP app](configure-language-php.md)
+> [Configure PHP app](configure-language-php.md)
azure-app-configuration Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md
Previously updated : 02/19/2020 - Last updated : 08/09/2022 + # Azure CLI samples
-The following table includes links to bash scripts for Azure App Configuration by using the Azure CLI.
+The following table includes links to bash scripts for Azure App Configuration by using the [az appconfig](/cli/azure/appconfig) commands in the Azure CLI:
| Script | Description | |-|-|
azure-app-configuration Concept Feature Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md
description: Turn features on and off using Azure App Configuration
-+ Previously updated : 02/20/2020 Last updated : 08/17/2022 # Feature management overview
-Traditionally, shipping a new application feature requires a complete redeployment of the application itself. Testing a feature often requires multiple deployments of the application. Each deployment may change the feature or expose the feature to different customers for testing.
+Traditionally, shipping a new application feature requires a complete redeployment of the application itself. Testing a feature often requires multiple deployments of the application. Each deployment might change the feature or expose the feature to different customers for testing.
-Feature management is a modern software-development practice that decouples feature release from code deployment and enables quick changes to feature availability on demand. It uses a technique called *feature flags* (also known as *feature toggles*, *feature switches*, and so on) to dynamically administer a feature's lifecycle.
+Feature management is a modern software-development practice that decouples feature release from code deployment and enables quick changes to feature availability on demand. It uses a technique called *feature flags* (also known as *feature toggles* and *feature switches*) to dynamically administer a feature's lifecycle.
Feature management helps developers address the following problems:
Feature management helps developers address the following problems:
* **Test in production**: Use feature flags to grant early access to new functionality in production. For example, you can limit access to team members or to internal beta testers. These users will experience the full-fidelity production experience instead of a simulated or partial experience in a test environment. * **Flighting**: Use feature flags to incrementally roll out new functionality to end users. You can target a small percentage of your user population first and increase that percentage gradually over time. * **Instant kill switch**: Feature flags provide an inherent safety net for releasing new functionality. You can turn application features on and off without redeploying any code. If necessary, you can quickly disable a feature without rebuilding and redeploying your application.
-* **Selective activation**: Use feature flags to segment your users and deliver a specific set of features to each group. You may have a feature that works only on a certain web browser. You can define a feature flag so that only users of that browser can see and use the feature. With this approach, you can easily expand the supported browser list later without having to make any code changes.
+* **Selective activation**: Use feature flags to segment your users and deliver a specific set of features to each group. You might have a feature that works only on a certain web browser. You can define a feature flag so that only users of that browser can see and use the feature. By using this approach, you can easily expand the supported browser list later without having to make any code changes.
## Basic concepts
if (featureFlag) {
} ```
-You can set the value of `featureFlag` statically.
+You can set the value of `featureFlag` statically:
```csharp bool featureFlag = true;
if (featureFlag) {
## Feature flag repository
-To use feature flags effectively, you need to externalize all the feature flags used in an application. This allows you to change feature flag states without modifying and redeploying the application itself.
+To use feature flags effectively, you need to externalize all the feature flags used in an application. You can use this approach to change feature flag states without modifying and redeploying the application itself.
Azure App Configuration provides a centralized repository for feature flags. You can use it to define different kinds of feature flags and manipulate their states quickly and confidently. You can then use the App Configuration libraries for various programming language frameworks to easily access these feature flags from your application.
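As a sketch of what this looks like in application code, the .NET Feature Management library replaces the hard-coded boolean above with an `IFeatureManager` check whose state comes from App Configuration; the `Beta` flag name is illustrative:

```csharp
using System.Threading.Tasks;
using Microsoft.FeatureManagement;

public class GreetingService
{
    private readonly IFeatureManager _featureManager;

    public GreetingService(IFeatureManager featureManager) =>
        _featureManager = featureManager;

    public async Task<string> GetGreetingAsync()
    {
        // The flag state lives in App Configuration, not in code,
        // so it can change without redeploying the application.
        if (await _featureManager.IsEnabledAsync("Beta"))
        {
            return "Welcome to the beta!";
        }

        return "Welcome!";
    }
}
```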
-[Use feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md) shows how the .NET Core App Configuration provider and Feature Management libraries are used together to implement feature flags for your ASP.NET web application.
+[Use feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md) shows how the .NET Core App Configuration provider and Feature Management libraries are used together to implement feature flags for your ASP.NET web application. For more information on feature flags in Azure App Configuration, see the following articles:
+
+* [Manage feature flags](./manage-feature-flags.md)
+* [Use conditional feature flags](./howto-feature-filters-aspnet-core.md)
+* [Enable a feature for specified users/groups](./howto-targetingfilter-aspnet-core.md)
+* [Add feature flags to an ASP.NET Core app](./quickstart-feature-flag-aspnet-core.md)
+* [Add feature flags to a .NET Framework app](./quickstart-feature-flag-dotnet.md)
+* [Add feature flags to an Azure Functions app](./quickstart-feature-flag-azure-functions-csharp.md)
+* [Add feature flags to a Spring Boot app](./quickstart-feature-flag-spring-boot.md)
+* [Use feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md)
+* [Use feature flags in a Spring Boot app](./use-feature-flags-spring-boot.md)
## Next steps
azure-app-configuration Concept Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-key-value.md
Previously updated : 08/04/2020 Last updated : 08/17/2022+ # Keys and values
Azure App Configuration stores configuration data as key-values. Key-values are
Keys serve as identifiers for key-values and are used to store and retrieve corresponding values. It's a common practice to organize keys into a hierarchical namespace by using a character delimiter, such as `/` or `:`. Use a convention best suited to your application. App Configuration treats keys as a whole. It doesn't parse keys to figure out how their names are structured or enforce any rule on them.
-Here is an example of key names structured into a hierarchy based on component
+Here's an example of key names structured into a hierarchy based on component services:
```aspx AppName:Service1:ApiEndpoint AppName:Service2:ApiEndpoint ```
-The use of configuration data within application frameworks might dictate specific naming schemes for key-values. For example, Java's Spring Cloud framework defines `Environment` resources that supply settings to a Spring application. These are parameterized by variables that include *application name* and *profile*. Keys for Spring Cloud-related configuration data typically start with these two elements separated by a delimiter.
+The use of configuration data within application frameworks might dictate specific naming schemes for key-values. For example, Java's Spring Cloud framework defines `Environment` resources that supply settings to a Spring application. These resources are parameterized by variables that include *application name* and *profile*. Keys for Spring Cloud-related configuration data typically start with these two elements separated by a delimiter.
-Keys stored in App Configuration are case-sensitive, unicode-based strings. The keys *app1* and *App1* are distinct in an App Configuration store. Keep this in mind when you use configuration settings within an application because some frameworks handle configuration keys case-insensitively. We do not recommend using case to differentiate keys.
+Keys stored in App Configuration are case-sensitive, unicode-based strings. The keys *app1* and *App1* are distinct in an App Configuration store. Keep this in mind when you use configuration settings within an application because some frameworks handle configuration keys case-insensitively. We don't recommend using case to differentiate keys.
-You can use any unicode character in key names except for `%`. A key name cannot be `.` or `..` either. There's a combined size limit of 10 KB on a key-value. This limit includes all characters in the key, its value, and all associated optional attributes. Within this limit, you can have many hierarchical levels for keys.
+You can use any unicode character in key names except for `%`. A key name can't be `.` or `..` either. There's a combined size limit of 10 KB on a key-value. This limit includes all characters in the key, its value, and all associated optional attributes. Within this limit, you can have many hierarchical levels for keys.
### Design key namespaces
-There are two general approaches to naming keys used for configuration data: flat or hierarchical. These methods are similar from an application usage standpoint, but hierarchical naming offers a number of advantages:
+Two general approaches to naming keys are used for configuration data: flat or hierarchical. These methods are similar from an application usage standpoint, but hierarchical naming offers many advantages:
* Easier to read. Delimiters in a hierarchical key name function as spaces in a sentence. They also provide natural breaks between words. * Easier to manage. A key name hierarchy represents logical groups of configuration data.
Label provides a convenient way to create variants of a key. A common use of lab
Use labels as a way to create multiple versions of a key-value. For example, you can input an application version number or a Git commit ID in labels to identify key-values associated with a particular software build. > [!NOTE]
-> If you are looking for change versions, App Configuration keeps all changes of a key-value occurred in the past certain period of time automatically. See [point-in-time snapshot](./concept-point-time-snapshot.md) for more details.
+> If you're looking for change versions, App Configuration automatically keeps all changes to a key-value that occurred within a certain period of time. For more information, see [point-in-time snapshot](./concept-point-time-snapshot.md).
### Query key-values
Each key-value is uniquely identified by its key plus a label that can be `\0`.
| Key | Description | |||
-| `key` is omitted or `key=*` | Matches all keys |
-| `key=abc` | Matches key name **abc** exactly |
-| `key=abc*` | Matches key names that start with **abc** |
-| `key=abc,xyz` | Matches key names **abc** or **xyz**. Limited to five CSVs |
+| `key` is omitted or `key=*` | Matches all keys. |
+| `key=abc` | Matches key name **abc** exactly. |
+| `key=abc*` | Matches key names that start with **abc**.|
+| `key=abc,xyz` | Matches key names **abc** or **xyz**. Limited to five CSVs. |
You also can include the following label patterns: | Label | Description | |||
-| `label` is omitted or `label=*` | Matches any label, which includes `\0` |
-| `label=%00` | Matches `\0` label |
-| `label=1.0.0` | Matches label **1.0.0** exactly |
-| `label=1.0.*` | Matches labels that start with **1.0.** |
-| `label=%00,1.0.0` | Matches labels `\0` or **1.0.0**, limited to five CSVs |
+| `label` is omitted or `label=*` | Matches any label, which includes `\0`. |
+| `label=%00` | Matches `\0` label. |
+| `label=1.0.0` | Matches label **1.0.0** exactly. |
+| `label=1.0.*` | Matches labels that start with **1.0.**. |
+| `label=%00,1.0.0` | Matches labels `\0` or **1.0.0**, limited to five CSVs. |
> [!NOTE] > `*`, `,`, and `\` are reserved characters in queries. If a reserved character is used in your key names or labels, you must escape it by using `\{Reserved Character}` in queries.
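For example, these patterns map directly to the `--key` and `--label` parameters of the Azure CLI; the store name below is a placeholder:

```azurecli-interactive
# List key-values whose keys start with "AppName:" and whose labels start with "1.0."
az appconfig kv list --name myAppConfigStore --key "AppName:*" --label "1.0.*"
```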
You also can include the following label patterns:
Values assigned to keys are also unicode strings. You can use all unicode characters for values.
-### Use Content-Type
-Each key-value in App Configuration has a content-type attribute. You can optionally use this attribute to store information about the type of value in a key-value that helps your application to process it properly. You can use any format for the content-type. App Configuration uses [Media Types]( https://www.iana.org/assignments/media-types/media-types.xhtml) (also known as MIME types) for built-in data types such as feature flags, Key Vault references, and JSON key-values.
+### Use content type
+
+Each key-value in App Configuration has a content type attribute. You can optionally use this attribute to store information about the type of value in a key-value that helps your application to process it properly. You can use any format for the content type. App Configuration uses [Media Types]( https://www.iana.org/assignments/media-types/media-types.xhtml) (also known as MIME types) for built-in data types such as feature flags, Key Vault references, and JSON key-values.
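For example, a key-value holding JSON can declare its content type when it's set; the store and key names below are illustrative:

```azurecli-interactive
# Declare the value as JSON so client libraries can deserialize it accordingly.
az appconfig kv set --name myAppConfigStore --key "Settings:FontColors" --value '["red","blue"]' --content-type "application/json"
```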
## Next steps
-* [Point-in-time snapshot](./concept-point-time-snapshot.md)
-* [Feature management](./concept-feature-management.md)
-* [Event handling](./concept-app-configuration-event.md)
+> [!div class="nextstepaction"]
+> [Point-in-time snapshot](./concept-point-time-snapshot.md)
+
+> [!div class="nextstepaction"]
+> [Feature management](./concept-feature-management.md)
+
+> [!div class="nextstepaction"]
+> [Event handling](./concept-app-configuration-event.md)
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
description: Authenticate to Azure App Configuration using managed identities
-+ Previously updated : 04/08/2021 Last updated : 08/23/2022 zone_pivot_groups: appconfig-provider # Use managed identities to access App Configuration
Azure App Configuration and its .NET Core, .NET Framework, and Java Spring clien
:::zone target="docs" pivot="framework-dotnet"
-This article shows how you can take advantage of the managed identity to access App Configuration. It builds on the web app introduced in the quickstarts. Before you continue, [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
+This article shows how you can take advantage of the managed identity to access App Configuration. It builds on the web app introduced in the quickstarts. Before you continue, [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) first.
:::zone-end :::zone target="docs" pivot="framework-spring"
-This article shows how you can take advantage of the managed identity to access App Configuration. It builds on the web app introduced in the quickstarts. Before you continue, [Create a Java Spring app with Azure App Configuration](./quickstart-java-spring-app.md) first.
+This article shows how you can take advantage of the managed identity to access App Configuration. It builds on the web app introduced in the quickstarts. Before you continue, [Create a Java Spring app with Azure App Configuration](./quickstart-java-spring-app.md) first.
:::zone-end > [!IMPORTANT]
-> Managed Identity cannot be used to authenticate locally-running applications. Your application must be deployed to an Azure service that supports Managed Identity. This article uses Azure App Service as an example, but the same concept applies to any other Azure service that supports managed identity, for example, [Azure Kubernetes Service](../aks/use-azure-ad-pod-identity.md), [Azure Virtual Machine](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md), and [Azure Container Instances](../container-instances/container-instances-managed-identity.md). If your workload is hosted in one of those services, you can leverage the service's managed identity support, too.
+> Managed identity can't be used to authenticate locally running applications. Your application must be deployed to an Azure service that supports Managed Identity. This article uses Azure App Service as an example. However, the same concept applies to any other Azure service that supports managed identity. For example, [Azure Kubernetes Service](../aks/use-azure-ad-pod-identity.md), [Azure Virtual Machine](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md), and [Azure Container Instances](../container-instances/container-instances-managed-identity.md). If your workload is hosted in one of those services, you can also leverage the service's managed identity support.
You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option available on the Windows, macOS, and Linux platforms.
To complete this tutorial, you must have:
:::zone target="docs" pivot="framework-spring" -- Azure subscription - [create one for free](https://azure.microsoft.com/free/)-- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.-- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
+* Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+* A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11.
+* [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
:::zone-end
To complete this tutorial, you must have:
To set up a managed identity in the portal, you first create an application and then enable the feature.
-1. Access your App Services resource in the [Azure portal](https://portal.azure.com). If you don't have an existing App Services resource to work with, create one.
+1. Access your App Services resource in the [Azure portal](https://portal.azure.com). If you don't have an existing App Services resource to use, create one.
1. Scroll down to the **Settings** group in the left pane, and select **Identity**. 1. On the **System assigned** tab, switch **Status** to **On** and select **Save**.
-1. Answer **Yes** when prompted to enable system assigned managed identity.
+1. When prompted, answer **Yes** to turn on the system-assigned managed identity.
- ![Set managed identity in App Service](./media/set-managed-identity-app-service.png)
+ :::image type="content" source="./media/add-managed-identity-app-service.png" alt-text="Screenshot of how to add a managed identity in App Service.":::
## Grant access to App Configuration The following steps describe how to assign the App Configuration Data Reader role to App Service. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. In the [Azure portal](https://portal.azure.com), select **All resources** and select the App Configuration store that you created in the quickstart.
+1. In the [Azure portal](https://portal.azure.com), select **All resources** and select the App Configuration store that you created in the [quickstart](../azure-app-configuration/quickstart-azure-functions-csharp.md).
1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment**.
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu.png" alt-text="Screenshot that shows the Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Role** tab, select the **App Configuration Data Reader** role.
+ If you don't have permission to assign roles, then the **Add role assignment** option will be disabled. For more information, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
+1. On the **Role** tab, select the **App Configuration Data Reader** role and then select **Next**.
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+ :::image type="content" source="../../includes/role-based-access-control/media/select-role-assignment-generic.png" alt-text="Screenshot that shows the Add role assignment page with Role tab selected.":::
-1. Select your Azure subscription, for Managed Identity select **App Service**, then select your App Service name.
+1. On the **Members** tab, select **Managed identity** and then select **Select members**.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/add-members.png" alt-text="Screenshot that shows the Add role assignment page with Members tab selected.":::
+
+1. Select your Azure subscription. For **Managed identity**, select **App Service**, and then select your App Service name.
+
+ :::image type="content" source="../../includes/role-based-access-control/media/select-managed-identity-members.png" alt-text="Screenshot that shows the select managed identities page.":::
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
The following steps describe how to assign the App Configuration Data Reader rol
:::zone target="docs" pivot="framework-dotnet"
-1. Add a reference to the *Azure.Identity* package:
+1. Add a reference to the `Azure.Identity` package:
```bash dotnet add package Azure.Identity
The following steps describe how to assign the App Configuration Data Reader rol
1. Find the endpoint to your App Configuration store. This URL is listed on the **Access keys** tab for the store in the Azure portal.
-1. Open *appsettings.json*, and add the following script. Replace *\<service_endpoint>*, including the brackets, with the URL to your App Configuration store.
+1. Open the *appsettings.json* file and add the following script. Replace *\<service_endpoint>*, including the brackets, with the URL to your App Configuration store.
```json "AppConfig": {
The following steps describe how to assign the App Configuration Data Reader rol
} ```
-1. Open *Program.cs*, and add a reference to the `Azure.Identity` and `Microsoft.Azure.Services.AppAuthentication` namespaces:
+1. Open the *Program.cs* file and add a reference to the `Azure.Identity` and `Microsoft.Azure.Services.AppAuthentication` namespaces:
```csharp-interactive using Azure.Identity; ```
-1. If you wish to access only values stored directly in App Configuration, update the `CreateWebHostBuilder` method by replacing the `config.AddAzureAppConfiguration()` method (this is found in the `Microsoft.Azure.AppConfiguration.AspNetCore` package).
+1. If you wish to access only values stored directly in App Configuration, update the `CreateWebHostBuilder` method by replacing the `config.AddAzureAppConfiguration()` method (this method is found in the `Microsoft.Azure.AppConfiguration.AspNetCore` package).
> [!IMPORTANT]
- > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.0. Select the correct syntax based on your environment.
+ > `CreateHostBuilder` replaces `CreateWebHostBuilder` in .NET Core 3.0. Select the correct syntax based on your environment.
### [.NET Core 5.x](#tab/core5x)
The following steps describe how to assign the App Configuration Data Reader rol
> [!NOTE]
- > If you want to use a **user-assigned managed identity**, be sure to specify the clientId when creating the [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential).
+ > If you want to use a **user-assigned managed identity**, be sure to specify the `clientId` when creating the [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential).
>```csharp >config.AddAzureAppConfiguration(options => > { > options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential("<your_clientId>")) > }); >```
- >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
+ >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library requires you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the `clientId` even if only one user-assigned managed identity is defined and there is no system-assigned managed identity.
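For comparison, a sketch of the system-assigned case reuses the endpoint setting added earlier and needs no client ID:

```csharp
config.AddAzureAppConfiguration(options =>
{
    // With a system-assigned identity, ManagedIdentityCredential takes no arguments.
    options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential());
});
```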
:::zone-end
spring.cloud.azure.appconfiguration.stores[0].endpoint=<service_endpoint>
``` > [!NOTE]
-> If you want to use **user-assigned managed identity** the property `spring.cloud.azure.appconfiguration.stores[0].managed-identity.client-id`, be sure to specify the clientId when creating the [ManagedIdentityCredential](/java/api/com.azure.identity.managedidentitycredential).
+> If you want to use a **user-assigned managed identity**, set the property `spring.cloud.azure.appconfiguration.stores[0].managed-identity.client-id`, and ensure that you specify the `clientId` when creating the [ManagedIdentityCredential](/java/api/com.azure.identity.managedidentitycredential).
:::zone-end
spring.cloud.azure.appconfiguration.stores[0].endpoint=<service_endpoint>
:::zone target="docs" pivot="framework-dotnet"
-Using managed identities requires you to deploy your app to an Azure service. Managed identities can't be used for authentication of locally-running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-dotnetcore.md?pivots=development-environment-vs&tabs=netcore31#publish-your-web-app).
+You must deploy your app to an Azure service when you use managed identities. Managed identities can't be used for authentication of locally running apps. To deploy the .NET Core app that you created in the [Create an ASP.NET Core app with App Configuration](./quickstart-aspnet-core-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-dotnetcore.md?pivots=development-environment-vs&tabs=netcore31#publish-your-web-app).
:::zone-end :::zone target="docs" pivot="framework-spring"
-Using managed identities requires you to deploy your app to an Azure service. Managed identities can't be used for authentication of locally-running apps. To deploy the Spring app that you created in the [Create a Java Spring app with Azure App Configuration](./quickstart-java-spring-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux).
+Using managed identities requires you to deploy your app to an Azure service. Managed identities can't be used for authentication of locally running apps. To deploy the Spring app that you created in the [Create a Java Spring app with Azure App Configuration](./quickstart-java-spring-app.md) quickstart and modified to use managed identities, follow the guidance in [Publish your web app](../app-service/quickstart-java.md?tabs=javase&pivots=platform-linux).
:::zone-end
In addition to App Service, many other Azure services support managed identities
[!INCLUDE [azure-app-configuration-cleanup](../../includes/azure-app-configuration-cleanup.md)] ## Next steps+ In this tutorial, you added an Azure managed identity to streamline access to App Configuration and improve credential management for your app. To learn more about how to use App Configuration, continue to the Azure CLI samples. > [!div class="nextstepaction"]
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 07/26/2022 Last updated : 08/29/2022
The table below lists the URLs that must be available in order to install and us
|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public | > [!NOTE]
-> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET /urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
+> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
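For example, with `curl` and an assumed region of `eastus`:

```bash
curl "https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=eastus"
```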
### [Azure Government](#tab/azure-government)
azure-fluid-relay Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/faq.md
The following are frequently asked questions about Azure Fluid Relay
## When will Azure Fluid Relay be Generally Available?
-Azure Fluid Relay will be Generally Available on 8/1/2022. At that point, the service will no longer be free. Charges will apply based on your usage of Azure Fluid Relay. The service will be metering 4 activities:
--- Operations in: As end users join, leave, and contribute to a collaborative session, the Fluid Framework client libraries send messages (also referred to as operations or ops) to the service. Each message incoming from one client is counted as one message. Heartbeat messages and other session messages are also counted. Messages larger than 2KB are counted as multiple messages of 2KB each (for example, 11KB message is counted as 6 messages).-- Operations out: Once the service processes incoming messages, it broadcasts them to all participants in the collaborative session. Each message sent to each client is counted as one message (for example, in a 3-user session, one of the users sends an op, that will generate 3 ops out).-- Client connectivity minutes: The duration of each user being connected to the session will be charged on a per user basis (for example, 3 users collaborate on a session for an hour, this is charged as 180 connectivity minutes).-- Storage: Each collaborative Fluid session stores session artifacts in the service. Storage of this data will be charged on a per GB per month basis (prorated as appropriate).-
-Reference the table below for the prices (in USD) we will start to charge at General Availability for each of these meters in the regions Azure Fluid Relay is currently offered. Additional regions and additional information about other currencies will be available on our pricing page soon.
-
-| Meter | Unit | West US 2 | West Europe | Southeast Asia
-|--|--|--|--|--|
-| Operations In | 1 million ops | 1.50 | 1.95 | 1.95 |
-| Operations Out | 1 million ops | 0.50 | 0.65 | 0.65 |
-| Client Connectivity Minutes | 1 million minutes | 1.50 | 1.95 | 1.95 |
-| Storage | 1 GB/month | 0.20 | 0.26 | 0.26 |
--
+Azure Fluid Relay is now Generally Available. For a complete list of available regions, see [Azure Fluid Relay regions and availability](https://azure.microsoft.com/global-infrastructure/services/?products=fluid-relay). For our pricing list, see [Azure Fluid Relay pricing](https://azure.microsoft.com/pricing/details/fluid-relay).
## Which Azure regions currently provide Fluid Relay?
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
Title: Analyze usage in Log Analytics workspace in Azure Monitor
description: Methods and queries to analyze the data in your Log Analytics workspace to help you understand usage and potential cause for high usage. Previously updated : 03/24/2022 Last updated : 08/25/2022 # Analyze usage in Log Analytics workspace
You should start your analysis with existing tools in Azure Monitor. These requi
- Top resources contributing data - Trend of data ingestion
-See the **Usage** tab for a breakdown of ingestion by solution and table. This can help you quickly identify the tables that contribute to the bulk of your data volume. It also shows trending of data collection over time to determine if data collection steadily increase over time or suddenly increased in response to a particular configuration change.
+See the **Usage** tab for a breakdown of ingestion by solution and table. This breakdown can help you quickly identify the tables that contribute the bulk of your data volume. It also shows the trend of data collection over time, so you can determine whether collection increased steadily or suddenly in response to a particular configuration change.
Select **Additional Queries** for pre-built queries that help you further understand your data patterns.
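To run a similar breakdown yourself, a representative query against the [Usage](/azure/azure-monitor/reference/tables/usage) table summarizes billable volume by table over the last month; adjust the time range to suit:

```kusto
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
// Quantity is reported in MB; convert to GB for readability.
| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
| sort by BillableDataGB desc
```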
Analyze the amount of billable data collected from a virtual machine or set of vir
> [!WARNING] > Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
-**Billable data volume by computer**
+**Billable data volume by computer for the last full day**
```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer, Type
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, Computer, Type
| where _IsBillable == true and Type != "Usage" | extend computerName = tolower(tostring(split(Computer, '.')[0])) | summarize BillableDataBytes = sum(_BilledSize) by computerName | sort by BillableDataBytes desc nulls last ```
-**Count of billable events by computer**
+**Count of billable events by computer for the last full day**
```kusto
-find where TimeGenerated > ago(24h) project _IsBillable, Computer, Type
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _IsBillable, Computer, Type
| where _IsBillable == true and Type != "Usage" | extend computerName = tolower(tostring(split(Computer, '.')[0])) | summarize eventCount = count() by computerName
Analyze the amount of billable data collected from a particular resource or set
> [!WARNING] > Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
-**Billable data volume by resource ID**
+**Billable data volume by resource ID for the last full day**
```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true | summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | sort by BillableDataBytes nulls last ```
-**Billable data volume by resource group**
+**Billable data volume by resource group for the last full day**
```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true | summarize BillableDataBytes = sum(_BilledSize) by _ResourceId | extend resourceGroup = tostring(split(_ResourceId, "/")[4] )
It may be helpful to parse the **_ResourceId** :
resourceGroup "/providers/" provider "/" resourceType "/" resourceName ```
-**Billable data volume by subscription**
+**Billable data volume by subscription for the last full day**
```kusto
-find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, _SubscriptionId
+find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project _BilledSize, _IsBillable, _SubscriptionId
| where _IsBillable == true | summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId | sort by BillableDataBytes nulls last ```+
+> [!TIP]
+> For workspaces with large data volumes, queries like those in this section, which scan large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
+ ## Querying for common data types If you find that you have excessive billable data for a particular data type, then you may need to perform a query to analyze data in that table. The following queries provide samples for some common data types:
There are two approaches to investigating the amount of data collected for Appli
> [!NOTE]
-> The queries in this section will work for both a workspace-based and classic Application Insights resource since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
+> Queries against Application Insights tables, except `SystemEvents`, will work for both a workspace-based and classic Application Insights resource, since [backwards compatibility](../app/convert-classic-resource.md#understanding-log-queries) allows you to continue to use [legacy table names](../app/apm-tables.md). For a workspace-based resource, open **Logs** from the **Log Analytics workspace** menu. For a classic resource, open **Logs** from the **Application Insights** menu.
-
-**Operations generate the most data volume in the last 30 days (workspace-based or classic)**
+**Dependency operations that generate the most data volume in the last 30 days (workspace-based or classic)**
```kusto dependencies
dependencies
| render barchart ``` -
-**Data volume ingested in the last 24 hours (classic)**
-
-```kusto
-systemEvents
-| where timestamp >= ago(24h)
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes)
-```
-
-**Data volume by type ingested in the last 24 hours (classic)**
+**Daily data volume by type for this Application Insights resource for the last 7 days (classic only)**
```kusto systemEvents
-| where timestamp >= startofday(ago(30d))
+| where timestamp >= startofday(ago(7d)) and timestamp < startofday(now())
| where type == "Billing" | extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"]) | extend BillingTelemetrySizeInBytes = todouble(measurements["BillingTelemetrySize"])
-| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d)
-| render barchart
-```
-
-**Count of event types ingested in the last 24 hours (classic)**
-
-```kusto
-systemEvents
-| where timestamp >= startofday(ago(30d))
-| where type == "Billing"
-| extend BillingTelemetryType = tostring(dimensions["BillingTelemetryType"])
-| summarize count() by BillingTelemetryType, bin(timestamp, 1d)
-| render barchart
+| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d)
``` - ### Data volume trends for workspace-based resources To look at the data volume trends for [workspace-based Application Insights resources](../app/create-workspace-resource.md), use a query that includes all of the Application insights tables. The following queries use the [tables names specific to workspace-based resources](../app/apm-tables.md#table-schemas).
-**Data volume trends for all Application Insights resources in a workspace for the last week**
+**Daily data volume by type for all Application Insights resources in a workspace for the last 7 days**
```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
+union AppAvailabilityResults,
+ AppBrowserTimings,
+ AppDependencies,
+ AppExceptions,
+ AppEvents,
+ AppMetrics,
+ AppPageViews,
+ AppPerformanceCounters,
+ AppRequests,
+ AppSystemEvents,
+ AppTraces
| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now()) | summarize sum(_BilledSize) by _ResourceId, bin(TimeGenerated, 1d)
-| render areachart
```
-**Data volume trends for a specific Application Insights resources in a workspace for the last week**
+To look at the data volume trends for only a single Application Insights resource, add the following line before the `summarize` in the above query:
```kusto
-union (AppAvailabilityResults),
- (AppBrowserTimings),
- (AppDependencies),
- (AppExceptions),
- (AppEvents),
- (AppMetrics),
- (AppPageViews),
- (AppPerformanceCounters),
- (AppRequests),
- (AppSystemEvents),
- (AppTraces)
-| where TimeGenerated >= startofday(ago(7d)) and TimeGenerated < startofday(now())
| where _ResourceId contains "<myAppInsightsResourceName>"
-| summarize sum(_BilledSize) by Type, bin(TimeGenerated, 1d)
-| render areachart
``` -
+> [!TIP]
+> For workspaces with large data volumes, queries like the one above, which scan large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day.
## Understanding nodes sending data If you don't have excessive data from any particular source, you may have an excessive number of agents that are sending data.
-> [!WARNING]
-> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
-- **Count of agent nodes that are sending a heartbeat each day in the last month** ```kusto
Heartbeat
| render timechart ```
+> [!WARNING]
+> Use [find](/azure/data-explorer/kusto/query/findoperator?pivots=azuremonitor) queries sparingly because scans across data types are [resource intensive](./query-optimization.md#query-details-pane) to execute. If you don't need results per subscription, resource group, or resource name, use the [Usage](/azure/azure-monitor/reference/tables/usage) table as in the queries above.
**Count of nodes sending any data in the last 24 hours** ```kusto
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 08/19/2022 Last updated : 08/29/2022 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
* [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html) * [Distributed training in Azure: Lane detection - Solution design](https://www.netapp.com/media/32427-tr-4896-design.pdf) * [Distributed training in Azure: Click-Through Rate Prediction - Solution design](https://docs.netapp.com/us-en/netapp-solutions/ai/aks-anf_introduction.html)
-* [How to use Azure Machine Learning with Azure NetApp Files](https://github.com/csiebler/azureml-with-azure-netapp-files)
### Education
This section provides solutions for Azure platform services.
* [Protecting applications on private Azure Kubernetes Service clusters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-applications-on-private-azure-kubernetes-service/ba-p/3289422) * [Providing Disaster Recovery to CloudBees-Jenkins in AKS with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/providing-disaster-recovery-to-cloudbees-jenkins-in-aks-with/ba-p/3553412)
+### Azure Machine Learning
+
+* [High-performance storage for AI Model Training tasks using Azure ML studio with Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/high-performance-storage-for-ai-model-training-tasks-using-azure/ba-p/3609189#_Toc112321755)
+* [How to use Azure Machine Learning with Azure NetApp Files](https://github.com/csiebler/azureml-with-azure-netapp-files)
+ ### Azure Red Hat Openshift * [Using Trident to Automate Azure NetApp Files from OpenShift](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/using-trident-to-automate-azure-netapp-files-from-openshift/ba-p/2367351)
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 08/19/2022 Last updated : 08/29/2022
Azure NetApp Files backup is supported for the following regions:
* Japan East * North Europe * South Central US
+* UK South
* West Europe * West US * West US 2
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 08/15/2022 Last updated : 08/29/2022 # Move operation support for resources
Jump to a resource provider namespace:
> - [Microsoft.KubernetesConfiguration](#microsoftkubernetesconfiguration) > - [Microsoft.Kusto](#microsoftkusto) > - [Microsoft.LabServices](#microsoftlabservices)
+> - [Microsoft.LoadTestService](#microsoftloadtestservice)
> - [Microsoft.LocationBasedServices](#microsoftlocationbasedservices) > - [Microsoft.LocationServices](#microsoftlocationservices) > - [Microsoft.Logic](#microsoftlogic)
Jump to a resource provider namespace:
> Make sure moving to new subscription doesn't exceed [subscription quotas](azure-subscription-service-limits.md#azure-monitor-limits). > [!WARNING]
-> When moving a workspace-based Application Insights component to a different subscription, telemetry stored in the original subscription will not be accessible anymore. This is because telemetry is identified by the Application Insights resource ID, which changes when you move the component to a different subscription. Please notice that once moved, there is no way to retrieve telemetry from the original subscription.
+> Moving or renaming any Application Insights resource changes the resource ID. When the ID changes for a workspace-based resource, data sent for the prior ID is accessible only by querying the underlying Log Analytics workspace. The data will not be accessible from within the renamed or moved Application Insights resource.
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move |
Jump to a resource provider namespace:
> | labaccounts | No | No | No | > | users | No | No | No |
+## Microsoft.LoadTestService
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Resource group | Subscription | Region move |
+> | - | -- | - | -- |
+> | loadtests | No | No | No |
+ ## Microsoft.LocationBasedServices > [!div class="mx-tableFixed"]
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Use a stand to hold the script. Avoid angling the stand so that it can reflect s
The person operating the recording equipment (the recording engineer) should be in a separate room from the talent, with some way to talk to the talent in the recording booth (a *talkback circuit*).
-The recording should contain as little noise as possible, with a goal of an 80-dB signal-to-noise ratio or better.
+The recording should contain as little noise as possible, with a goal of a noise floor of -80 dB.
Listen closely to a recording of silence in your "booth," figure out where any noise is coming from, and eliminate the cause. Common sources of noise are air vents, fluorescent light ballasts, traffic on nearby roads, and equipment fans (even notebook PCs might have fans). Microphones and cables can pick up electrical noise from nearby AC wiring, usually a hum or buzz. A buzz can also be caused by a *ground loop*, which is caused by having equipment plugged into more than one electrical circuit.
cognitive-services Cognitive Services For Big Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/cognitive-services-for-big-data.md
Using the resources and libraries described in this article, you can embed conti
## Features and benefits
-Cognitive Services for big data can use resources from any [supported region](https://azure.microsoft.comglobal-infrastructure/services/?products=cognitive-services), as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
+Cognitive Services for big data can use resources from any [supported region](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services), as well as [containerized Cognitive Services](../cognitive-services-container-support.md). Containers support low or no connectivity deployments with ultra-low latency responses. Containerized Cognitive Services can be run locally, directly on the worker nodes of your Spark cluster, or on an external orchestrator like Kubernetes.
## Supported services
cognitive-services Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/getting-started.md
To get started on Azure Kubernetes Service, follow these steps:
## Try a sample
-After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](/search/search-synapseml-cognitive-services).
+After you set up your Spark cluster and environment, you can run a short sample. This sample assumes Azure Databricks and the `mmlspark.cognitive` package. For an example using `synapseml.cognitive`, see [Add search to AI-enriched data from Apache Spark using SynapseML](/azure/search/search-synapseml-cognitive-services).
First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md
# Quickstart: Create a Cognitive Services resource using the Azure portal
-Use this quickstart to create a Cognitive Services resource. After you create a Cognitive Service resource in the Azure portal , you'll get an endpoint and a key for authenticating your applications.
+Use this quickstart to create a Cognitive Services resource. After you create a Cognitive Service resource in the Azure portal, you'll get an endpoint and a key for authenticating your applications.
Azure Cognitive Services are cloud-based services with REST APIs, and client library SDKs available to help developers build cognitive intelligence into applications without having direct artificial intelligence (AI) or data science skills or knowledge. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, understand, and even begin to reason.
The multi-service resource is named **Cognitive Services** in the portal. The mu
:::image type="content" source="media/cognitive-services-apis-create-account/resource_create_screen-multi.png" alt-text="Multi-service resource creation screen":::
-1. Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
+1. Configure other settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
### [Decision](#tab/decision)
-1. You can select one of these links to create a Decision resource:
- - [Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector)
- - [Content Moderator](https://portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator)
- - [Metrics Advisor](https://portal.azure.com/#create/Microsoft.CognitiveServicesMetricsAdvisor)
- - [Personalizer](https://portal.azure.com/#create/Microsoft.CognitiveServicesPersonalizer)
-
-1. On the **Create** page, provide the following information:
-
- [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-
-1. Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
### [Language](#tab/language)
-1. You can select one of these links to create a Language resource:
- - [Immersive reader](https://portal.azure.com/#create/Microsoft.CognitiveServicesImmersiveReader)
- - [Language Understanding (LUIS)](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne)
- - [Language service](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics)
- - [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation)
- - [QnA Maker](https://portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker)
-
-1. On the **Create** page, provide the following information:
-
- [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-
-1. Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
### [Speech](#tab/speech)
-1. You can select this link to create a Speech resource: [Speech Services](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
+1. Select the following link to create a Speech resource:
+ - [Speech Services](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices)
1. On the **Create** page, provide the following information: [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-1. Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
+1. Select **Review + create**.
### [Vision](#tab/vision)
-1. You can select one of these links to create a Vision resource:
- - [Computer vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision)
- - [Custom vision service](https://portal.azure.com/#create/Microsoft.CognitiveServicesCustomVision)
- - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace)
-
-1. On the **Create** page, provide the following information:
-
- [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]
-
-1. Configure additional settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**.
If you want to clean up and remove a Cognitive Services subscription, you can de
1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups. 1. Locate the resource group containing the resource to be deleted. 1. If you want to delete the entire resource group, select the resource group name. On the next page, Select **Delete resource group**, and confirm.
-1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, click the ellipsis menu for that row, and select **Delete**.
+1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, select the ellipsis menu for that row, and select **Delete**.
If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md).
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-exploration.md
Last updated 08/28/2022
-# Exploration and Known
+# Exploration
With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes.
Personalizer currently uses an algorithm called *epsilon greedy* to explore.
You configure the percentage of traffic to use for exploration in the Azure portal's **Configuration** page for Personalizer. This setting determines the percentage of Rank calls that perform exploration.
-Personalizer determines whether to explore or use the model's learned best action with this probability on each rank call. This is different than the behavior in some A/B frameworks that lock a treatment on specific user IDs.
+Personalizer determines whether to explore or use the model's most probable action on each rank call. This is different than the behavior in some A/B frameworks that lock a treatment on specific user IDs.
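As a toy illustration of the epsilon-greedy decision described above (not Personalizer's actual implementation), the per-call choice can be sketched as:

```javascript
// Toy epsilon-greedy sketch: with probability epsilon, explore a random
// action; otherwise exploit the model's most probable action.
function chooseAction(actions, bestAction, epsilon) {
  if (Math.random() < epsilon) {
    return actions[Math.floor(Math.random() * actions.length)]; // explore
  }
  return bestAction; // exploit
}
```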
## Best practices for choosing an exploration setting
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 06/28/2022 Last updated : 08/24/2022
For example, if you have 1 TB of data in two regions then:
* Restore cost is calculated as (1000 \* 0.15) = $150 per restore

> [!TIP]
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continous 7-day tier does not incur charges for backup of the data.
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db). Continuous 7-day tier does not incur charges for backup of the data.
## Continuous 30-day tier vs Continuous 7-day tier
Currently the point in time restore functionality has the following limitations:
* Multi-regions write accounts aren't supported.
-* Currently Synapse Link isn't fully compatible with continuous backup mode. Click [here](analytical-store-introduction.md#backup) for more information.
+* Currently Azure Synapse Link isn't fully compatible with continuous backup mode. For more information about backup with analytical store, see [analytical store backup](analytical-store-introduction.md#backup).
* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account didn't exist.
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
description: Learn how to provision an account with continuous backup and point
Previously updated : 06/28/2022 Last updated : 08/24/2022
For PowerShell and CLI commands, the tier value is optional, if it isn't already
To provision an account with continuous backup, add the argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet assumes a single region write account, *Pitracct*, in the in *West US* region in the *MyRG* resource group. The account has continuous backup policy enabled. Continuous backup is configured at the ``Continous7days`` tier:
+The following cmdlet assumes a single region write account, *Pitracct*, in the *West US* region in the *MyRG* resource group. The account has continuous backup policy enabled. Continuous backup is configured at the ``Continuous7days`` tier:
```azurepowershell New-AzCosmosDBAccount `
New-AzCosmosDBAccount `
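A fuller sketch of such a provisioning call might look like this, assuming the Az.CosmosDB module; the `-ContinuousTier` parameter and all names here are assumptions:

```azurepowershell
# Sketch: single-region write account with continuous backup at the
# Continuous7days tier. Account, resource group, and location are examples.
New-AzCosmosDBAccount `
  -ResourceGroupName "MyRG" `
  -Name "pitracct" `
  -Location "West US" `
  -BackupPolicyType Continuous `
  -ContinuousTier Continuous7Days
```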
To provision an account with continuous backup, add an argument `-BackupPolicyType Continuous` along with the regular provisioning command.
-The following cmdlet is an example of continuous backup policy with the ``Continous7days`` tier:
+The following cmdlet is an example of continuous backup policy with the ``Continuous7days`` tier:
```azurepowershell New-AzCosmosDBAccount `
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
const container = client.database("myDatabase").container("myContainer");
const triggerId = "trgPreValidateToDoItemTimestamp"; await container.items.create({ category: "Personal",
- name = "Groceries",
- description = "Pick up strawberries",
- isComplete = false
+ name: "Groceries",
+ description: "Pick up strawberries",
+ isComplete: false
}, {preTriggerInclude: [triggerId]}); ```
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: Azure portal administration for direct Enterprise Agreements
description: This article explains the common tasks that a direct enterprise administrator accomplishes in the Azure portal. Previously updated : 08/03/2022 Last updated : 08/29/2022
EA admins and department administrators use departments to organize and report o
A department administrator can add new accounts to their departments. They can also remove accounts from their departments, but not from the enrollment.
-Check out the [EA admin manage departments](https://www.youtube.com/watch?v=NUlRrJFF1_U) video. It's part of the [Direct Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm) series of videos.
+Check out the [Manage departments in the Azure portal](https://www.youtube.com/watch?v=NUlRrJFF1_U) video.
->[!VIDEO https://www.youtube.com/embed/cxAtOSSE6UI]
+>[!VIDEO https://www.youtube.com/embed/vs3wIeRDK4Q]
### To create a department
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for direct EA
description: This article explains how enterprise administrators of direct Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 08/08/2022 Last updated : 08/29/2022
The EA admin receives an invoice notification email after the end of billing per
If you want to update the PO number after your invoice is generated, then contact Azure support in the Azure portal.
+Check out the [Manage purchase order number in the Azure portal](https://www.youtube.com/watch?v=26aanfQfjaY) video.
+>[!VIDEO https://www.youtube.com/embed/26aanfQfjaY]
To update the PO number for a billing account: 1. Sign in to the [Azure portal](https://portal.azure.com).
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Previously updated : 07/04/2022 Last updated : 08/24/2022 # Copy and transform data in Azure Blob storage by using Azure Data Factory or Azure Synapse Analytics
These properties are supported for an Azure Blob storage linked service:
| serviceEndpoint | Specify the Azure Blob storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes | | accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No | | servicePrincipalId | Specify the application's client ID. | Yes |
-| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securelyFactory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering over the upper-right corner of the Azure portal. | Yes | | azureCloudType | For service principal authentication, specify the type of Azure cloud environment, to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
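To show how these properties fit together, a linked service payload using service principal key authentication might look like the following sketch (all IDs, names, and the key are placeholders):

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
            "accountKind": "StorageV2",
            "servicePrincipalId": "<application client ID>",
            "servicePrincipalCredentialType": "ServicePrincipalKey",
            "servicePrincipalCredential": {
                "type": "SecureString",
                "value": "<application key>"
            },
            "tenant": "<tenant ID or domain name>"
        }
    }
}
```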
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
Previously updated : 07/28/2022 Last updated : 08/24/2022 # Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
To copy data from Snowflake, the following properties are supported in the Copy
| : | :-- | :- | | type | The type property of the Copy activity source must be set to **SnowflakeSource**. | Yes | | query | Specifies the SQL query to read data from Snowflake. If the names of the schema, table and columns contain lower case, quote the object identifier in query e.g. `select * from "schema"."myTable"`.<br>Executing stored procedure is not supported. | No |
-| exportSettings | Advanced settings used to retrieve data from Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | No |
+| exportSettings | Advanced settings used to retrieve data from Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | Yes |
| ***Under `exportSettings`:*** | | | | type | The type of export command, set to **SnowflakeExportCopyCommand**. | Yes | | additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: MAX_FILE_SIZE, OVERWRITE. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#copy-options-copyoptions). | No |
To use this feature, create an [Azure Blob storage linked service](connector-azu
], "typeProperties": { "source": {
- "type": "SnowflakeSource",
- "sqlReaderQuery": "SELECT * FROM MyTable"
+ "type": "SnowflakeSource",
+ "sqlReaderQuery": "SELECT * FROM MyTable",
+ "exportSettings": {
+ "type": "SnowflakeExportCopyCommand"
+ }
}, "sink": { "type": "<sink type>"
To copy data to Snowflake, the following properties are supported in the Copy ac
| :- | :-- | :-- | | type | The type property of the Copy activity sink, set to **SnowflakeSink**. | Yes | | preCopyScript | Specify a SQL query for the Copy activity to run before writing data into Snowflake in each run. Use this property to clean up the preloaded data. | No |
-| importSettings | Advanced settings used to write data into Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | No |
+| importSettings | Advanced settings used to write data into Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | Yes |
| ***Under `importSettings`:*** | | | | type | The type of import command, set to **SnowflakeImportCopyCommand**. | Yes | | additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: ON_ERROR, FORCE, LOAD_UNCERTAIN_FILES. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#copy-options-copyoptions). | No |
If your source data store and format meet the criteria described in this section
"type": "SnowflakeImportCopyCommand", "copyOptions": { "FORCE": "TRUE",
- "ON_ERROR": "SKIP_FILE",
+ "ON_ERROR": "SKIP_FILE"
}, "fileFormatOptions": {
- "DATE_FORMAT": "YYYY-MM-DD",
+ "DATE_FORMAT": "YYYY-MM-DD"
} } }
To use this feature, create an [Azure Blob storage linked service](connector-azu
"type": "<source type>" }, "sink": {
- "type": "SnowflakeSink"
+ "type": "SnowflakeSink",
+ "importSettings": {
+ "type": "SnowflakeImportCopyCommand"
+ }
}, "enableStaging": true, "stagingSettings": {
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 08/10/2022 Last updated : 08/26/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
[**Monitoring experimental view**](#monitoring-experimental-view) * [Simplified default monitoring view](#simplified-default-monitoring-view)
-### Dataflow data first experimental view
+### Dataflow data-first experimental view
UI (user interface) changes have been made to mapping data flows. These changes were made to simplify and streamline the dataflow creation process so that you can focus on what your data looks like. The dataflow authoring experience remains the same as detailed [here](https://aka.ms/adfdataflows), except for certain areas detailed below.
+To see the data-first experimental view, you will need to follow these steps to enable it. By default, users will see the **Classic** style.
+
+> [!NOTE]
+> To enable the data-first view, you will need to enable the preview experience in your settings and you will need an active Data flow debug session.
+
+In your data flow editor, you can find several canvas tools on the right side like the **Search** tool, **Zoom** tool, and **Multi-select** tool.
++
+You will see a new icon under the **Multi-select** tool. This is how you can toggle between the **Classic** and the **Data-first** views.
++ #### Configuration panel The configuration panel for transformations has now been simplified. Previously, the configuration panel showed settings specific to the selected transformation.
Columns can be rearranged by dragging a column by its header. You can also sort
UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process.
-#### Adding activities
+#### Adding activities to the canvas
You now have the option to add an activity using the Add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
Select an activity by using the search box or scrolling through the listed activ
You can now view the activities contained in iteration and conditional activities. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-11.png" alt-text="Screenshot of all iteration and conditional activity containers.":::
-
+
+##### Adding activities
+ You have two options to add activities to your iteration and conditional activities. 1. Use the + button in your container to add an activity.
You have two options to add activities to your iteration and conditional activit
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right-most activity."::: Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
-
+
+##### Adjusting activity size
+
+Your containerized activities can be viewed in two sizes. In the expanded size, you will be able to see all the activities in the container.
++
+To save space on your canvas, you can also collapse the containerized view using the **Minimize** arrows found in the top right corner of the activity.
++
+This will shrink the activity size and hide the nested activities.
++
+If you have multiple container activities, you can save time by collapsing or expanding all activities at once by right-clicking on the canvas. This will bring up the option to hide all nested activities.
++
+Click **Hide nested activities** to collapse all containerized activities. To expand all the activities, click **Show nested activities**, found in the same list of canvas options.
++ ### Monitoring experimental view
databox Data Box Deploy Copy Data Via Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-nfs.md
Previously updated : 03/11/2022 Last updated : 08/26/2022 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
The following table shows the UNC path to the shares on your Data Box and Azure
| Azure Storage type| Data Box shares | |-|--|
-| Azure Block blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<StorageAccountName_BlockBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_PageBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_AzFile>/<ShareName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-| Azure Block blobs (Archive) | <li>UNC path to shares: `//<DeviceIPAddres>/<StorageAccountName_BlockBlobArchive>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Block blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_BlockBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Page blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_PageBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Files |<li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_AzFile>/<ShareName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Block blobs (Archive) | <li>UNC path to shares: `//<DeviceIPAddress>/<storageaccountname_BlockBlobArchive>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
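Once an NFS client has been granted access (see the steps below), mounting one of these shares might look like the following sketch; the device IP, share name, and mount point are hypothetical:

```console
sudo mount -t nfs 10.126.76.138:/utsac1_BlockBlob /home/databoxubuntuhost/databox
```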
If you are using a Linux host computer, perform the following steps to configure Data Box to allow access to NFS clients.
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
Previously updated : 07/02/2020 Last updated : 08/26/2022 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data.md
Previously updated : 03/17/2022 Last updated : 08/26/2022 # Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.
The following table shows the UNC path to the shares on your Data Box and Azure
|Azure Storage types | Data Box shares | |-|--|
-| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<StorageAccountName_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddres>\<StorageAccountName_BlockBlobArchive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Block blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Page blobs | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_PageBlob>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
+| Azure Files |<li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_AzFile>\<ShareName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
+| Azure Block blobs (Archive) | <li>UNC path to shares: `\\<DeviceIPAddress>\<storageaccountname_BlockBlobArchive>\<ContainerName>\files\a.txt`</li><li>Azure Storage URL: `https://<storageaccountname>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
If using a Windows Server host computer, follow these steps to connect to the Data Box.
If using a Windows Server host computer, follow these steps to connect to the Da
`net use \\<IP address of the device>\<share name> /u:<IP address of the device>\<user name for the share>` Depending upon your data format, the share paths are as follows:
- - Azure Block blob - `\\10.126.76.138\utSAC1_202006051000_BlockBlob`
- - Azure Page blob - `\\10.126.76.138\utSAC1_202006051000_PageBlob`
- - Azure Files - `\\10.126.76.138\utSAC1_202006051000_AzFile`
- - Azure Blob blob (Archive) - `\\10.126.76.138\utSAC0_202202241054_BlockBlobArchive`
+ - Azure Block blob - `\\10.126.76.138\utsac1_BlockBlob`
+ - Azure Page blob - `\\10.126.76.138\utsac1_PageBlob`
+ - Azure Files - `\\10.126.76.138\utsac1_AzFile`
+ - Azure Block blob (Archive) - `\\10.126.76.138\utsac0_BlockBlobArchive`
4. Enter the password for the share when prompted. If the password has special characters, add double quotation marks before and after it. The following sample shows connecting to a share via the preceding command.
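For instance, a hypothetical invocation of the preceding command (device IP, share name, and user are placeholders) might be:

```console
net use \\10.126.76.138\utsac1_BlockBlob /u:10.126.76.138\databoxuser
```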
If using a Windows Server host computer, follow these steps to connect to the Da
If using a Linux client, use the following command to mount the SMB share. The "vers" parameter below is the version of SMB that your Linux host supports. Plug in the appropriate version in the command below. For versions of SMB that the Data Box supports, see [Supported file systems for Linux clients](./data-box-system-requirements.md#supported-file-transfer-protocols-for-clients) ```console
-sudo mount -t nfs -o vers=2.1 10.126.76.138:/utSAC1_202006051000_BlockBlob /home/databoxubuntuhost/databox
+sudo mount -t cifs -o vers=2.1 //10.126.76.138/utsac1_BlockBlob /home/databoxubuntuhost/databox
``` ## Copy data to Data Box
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/diagnostic-logging.md
na Previously updated : 12/28/2020 Last updated : 08/29/2022
-# View and configure DDoS diagnostic logging
+# Tutorial: View and configure DDoS diagnostic logging
Azure DDoS Protection standard provides detailed attack insights and visualization with DDoS Attack Analytics. Customers protecting their virtual networks against DDoS attacks have detailed visibility into attack traffic and actions taken to mitigate the attack via attack mitigation reports & mitigation flow logs. Rich telemetry is exposed via Azure Monitor including detailed metrics during the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection), Splunk (Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.
The following diagnostic logs are available for Azure DDoS Protection Standard:
- **DDoSProtectionNotifications**: Notifications will notify you anytime a public IP resource is under attack, and when attack mitigation is over. - **DDoSMitigationFlowLogs**: Attack mitigation flow logs allow you to review the dropped traffic, forwarded traffic and other interesting datapoints during an active DDoS attack in near-real time. You can ingest the constant stream of this data into Microsoft Sentinel or to your third-party SIEM systems via event hub for near-real time monitoring, take potential actions and address the need of your defense operations.-- **DDoSMitigationReports**: Attack mitigation reports uses the Netflow protocol data which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 mins and a post-mitigation report for the whole mitigation period. This is to ensure that in an event the DDoS attack continues for a longer duration of time, you will be able to view the most current snapshot of mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
+- **DDoSMitigationReports**: Attack mitigation reports use the Netflow protocol data, which is aggregated to provide detailed information about the attack on your resource. Anytime a public IP resource is under attack, the report generation will start as soon as the mitigation starts. There will be an incremental report generated every 5 minutes and a post-mitigation report for the whole mitigation period. This ensures that if the DDoS attack continues for a longer duration, you'll be able to view the most current snapshot of the mitigation report every 5 minutes and a complete summary once the attack mitigation is over.
- **AllMetrics**: Provides all possible metrics available during the duration of a DDoS attack. In this tutorial, you'll learn how to:
If you want to automatically enable diagnostic logging on all public IPs within
5. Select **Public IP Address** for **Resource type**, then select the specific public IP address you want to enable logs for. 6. Select **Add diagnostic setting**. Under **Category Details**, select as many of the following options you require, and then select **Save**.
- ![DDoS Diagnostic Settings](./media/ddos-attack-telemetry/ddos-diagnostic-settings.png)
+ :::image type="content" source="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png" alt-text="Screenshot of DDoS diagnostic settings." lightbox="./media/ddos-attack-telemetry/ddos-diagnostic-settings.png":::
+
7. Under **Destination details**, select as many of the following options as you require: - **Archive to a storage account**: Data is written to an Azure Storage account. To learn more about this option, see [Archive resource logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-storage).
If you want to automatically enable diagnostic logging on all public IPs within
4. Under **General**, click on **Logs**
-5. In Query explorer, type in the following Kusto Query and change the time range to Custom and change the time range to last 3 months. Then hit Run.
+5. In Query explorer, type in the following Kusto query, change the time range to Custom, and set it to the last three months. Then hit Run.
```kusto AzureDiagnostics
This [built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/Po
You can connect logs to Microsoft Sentinel, view and analyze your data in workbooks, create custom alerts, and incorporate it into investigation processes. To connect to Microsoft Sentinel, see [Connect to Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection).
-![Microsoft Sentinel DDoS Connector](./media/ddos-attack-telemetry/azure-sentinel-ddos.png)
+ ### Azure DDoS Protection Workbook
You can use [this Azure Resource Manager (ARM) template](https://aka.ms/ddoswork
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%20DDoS%20Protection%2FWorkbook%20-%20Azure%20DDOS%20monitor%20workbook%2FAzureDDoSWorkbook_ARM.json)
-![DDoS Protection Workbook](./media/ddos-attack-telemetry/ddos-attack-analytics-workbook.png)
+ ## Validate and test
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
The data collected for each event is:
| **Transport_protocol** | Can be TCP, UDP, or ICMP. | | **Application protocol** | The application protocol associated with the connection. | | **Extended properties** | The Additional details of the connection. For example, `host name`. |
-| **DNS hit count** | Total hit count of DNS requests |
+| **Hit count** | The count of packets observed |
## Login collector (event-based collector)
The following data is collected:
| **user_name** | The Linux user. | | **executable** | The terminal device. For example, `tty1..6` or `pts/n`. | | **remote_address** | The source of connection, either a remote IP address in IPv6 or IPv4 format, or `127.0.0.1/0.0.0.0` to indicate local connection. |
-| **Login_UsePAM** | Boolean: <br>- **True**: Only the PAM Login collector is used <br>- **False**: The UTMP Login collector is used, with SYSLOG if SYSLOG is enabled |
## System Information (trigger-based collector)
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This section describes permissions available to sensor Administrators, Security
| Manage alerts: acknowledge, learn, and pin | | ✓ | ✓ | | View events in a timeline | | ✓ | ✓ | | Authorize devices, known scanning devices, programming devices | | ✓ | ✓ |
+| Delete devices | | | ✓ |
| View investigation data | ✓ | ✓ | ✓ | | Manage system settings | | | ✓ | | Manage users | | | ✓ |
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
The installation process checks to see if the required Docker version is already
<a name="install"></a>**To install the sensor**:
-1. On your physical appliance or VM, sign in to the sensor's CLI using a terminal, such as PUTTY, or MobaXterm.
+1. On your physical appliance or VM, sign in to the sensor's CLI using a terminal, such as PuTTY, or MobaXterm.
1. Run the command that you'd saved from the Azure portal. For example:
devtest Troubleshoot Expired Removed Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/troubleshoot-expired-removed-subscription.md
ms.prod: visual-studio-windows
-# Renew an expired subscription, purchase a new on, or transfer your Azure resources
+# Renew an expired subscription, purchase a new one, or transfer your Azure resources
If your Visual Studio subscription expires or is removed, all the subscription benefits, including the monthly Azure dev/test individual credit are no longer available. To continue using Azure with a monthly credit, you will need to renew your subscription, purchase a new subscription, and/or transfer your Azure resources to a different Azure subscription that includes the Azure dev/test individual credit.
There are several ways to continue using a monthly credit for Azure. To save you
## Convert your Azure subscription to pay-as-you-go
-If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](../../cost-management-billing/manage/spending-limit.md#remove-the-spending-limit-in-azure-portal).
+If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](../../cost-management-billing/manage/spending-limit.md#remove-the-spending-limit-in-azure-portal).
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
Previously updated : 06/14/2022 Last updated : 08/29/2022
IDPS signature rules have the following properties:
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether firewall will drop or alert upon matched traffic. The below signature mode can override IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You'll receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You'll receive alerts and suspicious traffic will be blocked. Few signature categories are defined as "Alert Only", therefore by default, traffic matching their signatures won't be blocked even though IDPS mode is set to "Alert and Deny". Customers may override this by customizing these specific signatures to "Alert and Deny" mode. <br><br> Note: IDPS alerts are available in the portal via network rule log query.| |Severity |Each signature has an associated severity level that indicates the probability that the signature is an actual attack.<br>- **Low**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
-|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined in Azure private IP range (according to IANA RFC 1918).<br>- **Outbound**: Signature is applied only on traffic sent from Azure private IP range (according to IANA RFC 1918) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
+|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](firewall-preview.md#idps-private-ip-ranges-preview) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE. The ID is listed here.| |Protocol |The protocol associated with this signature.|
hdinsight Enable Private Link On Kafka Rest Proxy Hdi Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enable-private-link-on-kafka-rest-proxy-hdi-cluster.md
+
+ Title: Enable Private Link on an HDInsight Kafka Rest Proxy cluster
+description: Learn how to Enable Private Link on an HDInsight Kafka Rest Proxy cluster.
++++ Last updated : 08/30/2022++
+# Enable Private Link on an HDInsight Kafka Rest Proxy cluster
+
+Follow these extra steps to enable private link for Kafka Rest Proxy HDI clusters.
+
+## Prerequisites
+
+As a prerequisite, complete the steps in the [Enable Private Link on an HDInsight cluster](./hdinsight-private-link.md) document, then perform the steps below.
+
+## Create private endpoints
+
+1. Click 'Create private endpoint' and use the following configurations to set up another Ambari private endpoint:
+
+ | Config | Value |
+ | | -- |
+ | Name | hdi-privlink-cluster-1 |
+ | Resource type | Microsoft.Network/privatelinkServices |
+ | Resource | kafkamanagementnode-* (This value should match the HDI deployment ID of your cluster, for example kafkamanagementnode-4eafe3a2a67e4cd88762c22a55fe4654) |
+ | Virtual network | hdi-privlink-client-vnet |
+ | Subnet | default |
+
+## Configure DNS to connect over private endpoints
+
+1. Add another record set to the Private DNS for Ambari.
+
+ | Config | Value |
+ | | -- |
+ | Name | YourPrivatelinkClusterName-1 |
+ | Type | A - Alias record to IPv4 address |
+ | TTL | 1 |
+ | TTL unit | Hours |
+ | IP Address | Private IP of private endpoint for Ambari access |
+
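If you prefer scripting this record, a PowerShell sketch using the Az.PrivateDns module might look like the following; the zone name, resource group, TTL, and IP are hypothetical:

```azurepowershell
# Sketch: add the extra Ambari A record to the private DNS zone.
New-AzPrivateDnsRecordSet `
  -ResourceGroupName "hdi-privlink-rg" `
  -ZoneName "azurehdinsight.net" `
  -Name "<YourPrivatelinkClusterName>-1" `
  -RecordType A `
  -Ttl 3600 `
  -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "<private endpoint IP>")
```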
+## Next steps
+
+* [Enterprise Security Package for Azure HDInsight](enterprise-security-package.md)
+* [Enterprise security general information and guidelines in Azure HDInsight](./domain-joined/general-guidelines.md)
hdinsight Hdinsight Troubleshoot Data Lake Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-data-lake-files.md
Title: Unable to access Data Lake storage files in Azure HDInsight
description: Unable to access Data Lake storage files in Azure HDInsight Previously updated : 08/13/2019 Last updated : 08/28/2022 # Unable to access Data Lake storage files in Azure HDInsight
Execute the PowerShell command after you substitute the parameters with the actu
## Next steps
hdinsight Hdinsight Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-private-link.md
Title: Enable Private Link on an Azure HDInsight cluster
description: Learn how to connect to an outside HDInsight cluster by using Azure Private Link. Previously updated : 06/08/2022++ Last updated : 08/30/2022 # Enable Private Link on an HDInsight cluster In this article, you'll learn about using Azure Private Link to connect to an HDInsight cluster privately across networks over the Microsoft backbone network. This article is an extension of the article [Restrict cluster connectivity in Azure HDInsight](./hdinsight-restrict-public-connectivity.md), which focuses on restricting public connectivity. If you want public connectivity to or within your HDInsight clusters and dependent resources, consider restricting the connectivity of your cluster by following guidelines in [Control network traffic in Azure HDInsight](./control-network-traffic.md).
-Private Link can be used in cross-network scenarios where virtual network peering is not available or enabled.
+Private Link can be used in cross-network scenarios where virtual network peering isn't available or enabled.
> [!NOTE] > Restricting public connectivity is a prerequisite for enabling Private Link and shouldn't be considered the same capability. The use of Private Link to connect to an HDInsight cluster is an optional feature and is disabled by default. The feature is available only when the `resourceProviderConnection` network property is set to *outbound*, as described in the article [Restrict cluster connectivity in Azure HDInsight](./hdinsight-restrict-public-connectivity.md).
-When `privateLink` is set to *enabled*, internal [standard load balancers](../load-balancer/load-balancer-overview.md) (SLBs) are created, and an Azure Private Link service is provisioned for each SLB. The Private Link service is what allows you to access the HDInsight cluster from private endpoints.
+When `privateLink` is set as *enabled*, internal [standard load balancers](../load-balancer/load-balancer-overview.md) (SLBs) are created, and an Azure Private Link service is provisioned for each SLB. The Private Link service is what allows you to access the HDInsight cluster from private endpoints.
## Private link deployment steps
-Successfully creating a Private Link cluster takes many steps, so we have outlined them here. Follow each of the steps below to ensure everything is setup correctly.
-
-### [Step 1: Create prerequisites](#Createpreqs)
-### [Step 2: Configure HDInsight subnet](#DisableNetworkPolicy)
-### [Step 3: Deploy NAT gateway or firewall](#NATorFirewall)
-### [Step 4: Deploy private link cluster](#deployCluster)
-### [Step 5: Create private endpoints](#PrivateEndpoints)
-### [Step 6: Configure DNS to connect over private endpoints](#ConfigureDNS)
-### [Step 7: Check cluster connectivity](#CheckConnectivity)
-### [Appendix: Manage private endpoints for HDInsight](#ManageEndpoints)
+Successfully creating a Private Link cluster takes many steps, so we've outlined them here. Follow each of the steps below to ensure everything is set up correctly.
## <a name="Createpreqs"></a>Step 1: Create prerequisites
-To start, deploy the following resources if you have not created them already. Once this is done you should have at least 1 resource group, 2 virtual networks, and a network security group to attach to the subnet where the HDInsight cluster will be deployed as shown below.
+To start, deploy the following resources if you haven't created them already. You need to have at least one resource group, two virtual networks, and a network security group to attach to the subnet where the HDInsight cluster will be deployed as shown below.
|Type|Name|Purpose| |-|-|-|
You can opt to use a NAT gateway if you don't want to configure a firewall or a
For a basic setup to get started: 1. Search for 'NAT Gateways' in the Azure portal and click **Create**.
-2. Use the following configurations in the NAT Gateway. (We are not including all configs here, so you can use the default value for those)
+2. Use the following configurations in the NAT Gateway. (We aren't including all configs here, so you can use the default values.)
| Config | Value | | | -- |
For a basic setup to get started:
| Virtual network | hdi-privlink-cluster-vnet | | Subnet name | default |
-3. Once the NAT Gateway is finished deploying, you are ready to go to the next step.
+3. Once the NAT Gateway is finished deploying, you're ready to go to the next step.
### Configure a firewall (Option 2) For a basic setup to get started:
For a basic setup to get started:
1. Use the new firewall's private IP address as the `nextHopIpAddress` value in your route table. 1. Add the route table to the configured subnet of your virtual network.
-Your HDInsight cluster still needs access to its outbound dependencies. If these outbound dependencies are not allowed, cluster creation might fail.
+Your HDInsight cluster still needs access to its outbound dependencies. If these outbound dependencies aren't allowed, cluster creation might fail.
For more information on setting up a firewall, see [Control network traffic in Azure HDInsight](./control-network-traffic.md).
## <a name="deployCluster"></a>Step 4: Deploy private link cluster
-At this point all prerequisites should be taken care of and you are ready to deploy the Private Link cluster. The following diagram shows an example of the networking configuration that's required before you create the cluster. In this example, all outbound traffic is forced to Azure Firewall through a user-defined route. The required outbound dependencies should be allowed on the firewall before cluster creation. For Enterprise Security Package clusters, virtual network peering can provide the network connectivity to Azure Active Directory Domain Services.
+At this point, all prerequisites should be taken care of and you're ready to deploy the Private Link cluster. The following diagram shows an example of the networking configuration that's required before you create the cluster. In this example, all outbound traffic is forced to Azure Firewall through a user-defined route. The required outbound dependencies should be allowed on the firewall before cluster creation. For Enterprise Security Package clusters, virtual network peering can provide the network connectivity to Azure Active Directory Domain Services.
:::image type="content" source="media/hdinsight-private-link/before-cluster-creation.png" alt-text="Diagram of the Private Link environment before cluster creation.":::
To create a cluster by using the Azure CLI, see the [example](/cli/azure/hdinsig
## <a name="PrivateEndpoints"></a>Step 5: Create private endpoints
-Azure automatically creates a Private link service for the Ambari and SSH load balancers during the Private Link cluster deployment. After the cluster is deployed, you have to create two Private endpoints on the client VNET(s), one for Ambari and one for SSH access. Then, link them to the Private link services which were created as part of the cluster deployment.
+Azure automatically creates a Private link service for the Ambari and SSH load balancers during the Private Link cluster deployment. After the cluster is deployed, you have to create two Private endpoints on the client VNET(s), one for Ambari and one for SSH access. Then, link them to the Private link services that were created as part of the cluster deployment.
To create the private endpoints:
1. Open the Azure portal and search for 'Private link'.
2. In the results, click the Private link icon.
-3. Click 'Create private endpoint' and use the following configurations to setup the Ambari private endpoint:
+3. Click 'Create private endpoint' and use the following configurations to set up the Ambari private endpoint:
| Config | Value |
| - | -- |
| Name | hdi-privlink-cluster |
| Resource type | Microsoft.Network/privateLinkServices |
- | Resource | gateway-* (This should match the HDI deployment ID of your cluster, for example gateway-4eafe3a2a67e4cd88762c22a55fe4654) |
+ | Resource | gateway-* (This value should match the HDI deployment ID of your cluster, for example gateway-4eafe3a2a67e4cd88762c22a55fe4654) |
| Virtual network | hdi-privlink-client-vnet |
| Subnet | default |
To create the private endpoints:
| Config | Value |
| - | -- |
| Name | hdi-privlink-cluster-ssh |
| Resource type | Microsoft.Network/privateLinkServices |
- | Resource | headnode-* (This should match the HDI deployment ID of your cluster, for example headnode-4eafe3a2a67e4cd88762c22a55fe4654) |
+ | Resource | headnode-* (This value should match the HDI deployment ID of your cluster, for example headnode-4eafe3a2a67e4cd88762c22a55fe4654) |
| Virtual network | hdi-privlink-client-vnet |
| Subnet | default |
+> [!IMPORTANT]
+> If you're using a KafkaRestProxy HDInsight cluster, follow these extra steps to [Enable Private Endpoints](./enable-private-link-on-kafka-rest-proxy-hdi-cluster.md#create-private-endpoints).
+>
+
+
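If you'd rather script the endpoints than use the portal, a minimal Azure CLI sketch for the Ambari endpoint follows. Look up the resource ID of the gateway-* Private Link service first; repeat with the headnode-* service and the name hdi-privlink-cluster-ssh for SSH access:

```azurecli-interactive
# Create a private endpoint in the client VNET that connects to the gateway-* Private Link service
az network private-endpoint create -g <resource-group> -n hdi-privlink-cluster \
    --vnet-name hdi-privlink-client-vnet --subnet default \
    --private-connection-resource-id <gateway-private-link-service-resource-id> \
    --connection-name hdi-privlink-cluster
```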
Once the private endpoints are created, you're done with this phase of the setup. If you didn't make a note of the private IP addresses assigned to the endpoints, follow the steps below:
1. Open the client VNET in the Azure portal.
The following image shows an example of the private DNS entries configured to en
To configure DNS resolution through a Private DNS zone:
-1. Create an Azure Private DNS zone. (We are not including all configs here, all other configs are left at default values)
+1. Create an Azure Private DNS zone. (We aren't including all configs here; all other configs are left at default values.)
| Config | Value |
| - | -- |
To configure DNS resolution through a Private DNS zone:
| TTL | 1 |
| TTL unit | Hours |
| IP Address | Private IP of private endpoint for SSH access |
+
+> [!IMPORTANT]
+> If you're using a KafkaRestProxy HDInsight cluster, follow these extra steps to [Configure DNS to connect over private endpoint](./enable-private-link-on-kafka-rest-proxy-hdi-cluster.md#configure-dns-to-connect-over-private-endpoints).
+>
4. Associate the private DNS zone with the client VNET by adding a Virtual Network Link.
   1. Open the private DNS zone in the Azure portal.
To configure DNS resolution through a Private DNS zone:
   1. Fill in the details: Link name, Subscription, and Virtual Network.
   1. Click **Save**.
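The zone, A records, and VNET link can also be created with the Azure CLI; a sketch, assuming the zone name `azurehdinsight.net` and using placeholders for the record names and private IPs:

```azurecli-interactive
az network private-dns zone create -g <resource-group> -n azurehdinsight.net
az network private-dns record-set a add-record -g <resource-group> \
    -z azurehdinsight.net -n <clustername> -a <ambari-endpoint-private-ip>
az network private-dns record-set a add-record -g <resource-group> \
    -z azurehdinsight.net -n <clustername>-ssh -a <ssh-endpoint-private-ip>
az network private-dns link vnet create -g <resource-group> \
    -z azurehdinsight.net -n hdi-privlink-dns-link \
    -v hdi-privlink-client-vnet -e false
```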
-## <a name="CheckConnectivity"></a>Step 6: Check cluster connectivity
+## <a name="CheckConnectivity"></a>Step 7: Check cluster connectivity
-The last step is to test connectivity to the cluster. Since this cluster is isolated or private, we cannot access the cluster using any public IP or FQDN. Instead we have a couple of options:
+The last step is to test connectivity to the cluster. Since this cluster is isolated or private, we can't access it using any public IP or FQDN. Instead, we have a couple of options:
-* Set up VPN access to the client VNET from your on premise network
+* Set up VPN access to the client VNET from your on-premises network
* Deploy a VM to the client VNET and access the cluster from this VM
-For this example, we will deploy a VM in the client VNET using the following configuration to test the connectivity.
+For this example, we'll deploy a VM in the client VNET using the following configuration to test the connectivity.
| Config | Value |
| - | -- |
To test Ambari access: <br>
To test ssh access: <br>
1. Open a command prompt to get a terminal window.
2. In the terminal window, try connecting to your cluster with SSH: `ssh sshuser@<clustername>.azurehdinsight.net` (Replace "sshuser" with the SSH user you created for your cluster.)
-3. If you are able to connect, the configuration is correct for SSH access.
+3. If you're able to connect, the configuration is correct for SSH access.
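Before or during these tests, it can help to confirm from the test VM that both FQDNs resolve to the private endpoint IPs you configured in the DNS zone; for example:

```bash
# Both names should return the private IPs of the corresponding private endpoints
nslookup <clustername>.azurehdinsight.net
nslookup <clustername>-ssh.azurehdinsight.net
```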
## <a name="ManageEndpoints"></a>Manage private endpoints for HDInsight
hdinsight Interactive Query Tutorial Analyze Flight Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-tutorial-analyze-flight-data.md
description: Tutorial - Learn how to extract data from a raw CSV dataset. Transf
Previously updated : 07/02/2019 Last updated : 08/28/2022 #Customer intent: As a data analyst, I need to load some data using Interactive Query, transform, and then export it to an Azure SQL database
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
To help manage the returned resources, there are search result parameters that y
| _revinclude | Yes | Yes | Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
| _summary | Yes | Yes | |
| _total | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. Note there's an open issue using _sort with chained search, which is documented in open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
+| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
| _contained | No | No | |
| _containedType | No | No | |
| _score | No | No | |
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
In the following sections, we'll cover the various aspects of querying resources
## Search parameters
-When you do a search in FHIR, you are searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria.
+When you do a search in FHIR, you are searching the database for resources that match certain search criteria. The FHIR API specifies a rich set of search parameters for fine-tuning search criteria. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. In a FHIR search API call, if a positive match is found between the request's search parameters and the corresponding element values stored in a resource instance, then the FHIR server returns a bundle containing the resource instance(s) whose elements satisfied the search criteria.
For each search parameter, the FHIR specification defines the [data type(s)](https://www.hl7.org/fhir/search.html#ptypes) that can be used. Support in the FHIR service for the various data types is outlined below.
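For instance, a query on a string-type search parameter might look like the following sketch (the `$FHIR_URL` and `$TOKEN` variables are assumptions standing in for your endpoint and access token):

```bash
# Return Patient resources whose family name matches "Smith" (string-type parameter)
curl -s "$FHIR_URL/Patient?family=Smith" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/fhir+json"
```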
Similarly, you can do a reverse chained search with the `_has` parameter. This a
## Pagination
-As mentioned above, the results from a FHIR search will be available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service will display 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle will include a `next` link. Repeatedly fetching the `next` link will yield the subsequent pages of results. Note that the `_count` parameter value cannot exceed 1000.
+As mentioned above, the results from a FHIR search will be available in paginated form at a link provided in the `searchset` bundle. By default, the FHIR service will display 10 search results per page, but this can be increased (or decreased) by setting the `_count` parameter. If there are more matches than fit on one page, the bundle will include a `next` link. Repeatedly fetching from the `next` link will yield the subsequent pages of results. Note that the `_count` parameter value cannot exceed 1000.
Currently, the FHIR service in Azure Health Data Services only supports the `next` link and doesn't support `first`, `last`, or `previous` links in bundles returned from a search.
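As a sketch of paging through results from the command line (assumes `curl` and `jq` are available, and `$FHIR_URL` and `$TOKEN` stand in for your endpoint and access token):

```bash
# Request up to 50 results, then print the bundle's "next" link for the following page
curl -s "$FHIR_URL/Patient?_count=50" \
  -H "Authorization: Bearer $TOKEN" \
  | jq -r '.link[] | select(.relation=="next") | .url'
```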
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
GET {{FHIR_URL}}/Patient?general-practitioner:Practitioner.name=Sarah&general-pr
This would return all `Patient` resources that have a reference to "Sarah" as a `generalPractitioner` plus a reference to a `generalPractitioner` that has an address in the state of Washington. In other words, if a patient had a `generalPractitioner` named Sarah from New York state and another `generalPractitioner` named Bill from Washington state, this would meet the conditions for a positive match when doing this search.
-For scenarios in which the search criteria carries a logical AND condition that strictly checks for paired element values, refer to the **composite search** examples below.
+For scenarios in which the search requires a logical AND condition that strictly checks for paired element values, refer to the **composite search** examples below.
## Reverse chained search
GET {{FHIR_URL}}/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:P
## Composite search
-To search for resources that contain elements grouped together as logically connected pairs, FHIR defines composite search, which joins single parameter values together with the `$` operator, making a connected pair of parameters. In a composite search, a positive match occurs when the intersection of element values satisfies all of the conditions set in the paired search parameters. For example, if you want to find all `DiagnosticReport` resources that contain a potassium value less than `9.2`:
+To search for resources that contain elements grouped together as logically connected pairs, FHIR defines composite search, which joins single parameter values together with the `$` operator, forming a connected pair of parameters. In a composite search, a positive match occurs when the intersection of element values satisfies all conditions set in the paired search parameters. For example, if you want to find all `DiagnosticReport` resources that contain a potassium value less than `9.2`:
```rest
GET {{FHIR_URL}}/DiagnosticReport?result.code-value-quantity=2823-3$lt9.2
```
iot-dps How To Send Additional Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-send-additional-data.md
This feature is available in the C, C#, Java, and Node.js client SDKs. To learn more
[IoT Plug and Play (PnP)](../iot-develop/overview-iot-plug-and-play.md) devices use the payload to send their model ID when they register with DPS. You can find examples of this usage in the PnP samples in the SDK or sample repositories. For example, [C# PnP thermostat](https://github.com/Azure-Samples/azure-iot-samples-csharp/blob/main/iot-hub/Samples/device/PnpDeviceSamples/Thermostat/Program.cs) or [Node.js PnP temperature controller](https://github.com/Azure/azure-iot-sdk-node/blob/main/device/samples/javascript/pnp_temperature_controller.js).
+## IoT Edge support
+
+Starting with version 1.4, IoT Edge supports sending a data payload contained in a JSON file. The payload file is read and sent to DPS when the device is (re)registered, which typically happens when you run `iotedge config apply` for the first time. You can also force the file to be re-read and the device re-registered by using the CLI's reprovision command, `iotedge system reprovision`.
+
+Below is an example snippet from `/etc/aziot/config.toml` where the `payload` property is set to the path of a local JSON file.
+
+```toml
+ [provisioning]
+ source = "dps"
+ global_endpoint = "https://global.azure-devices-provisioning.net"
+ id_scope = "0ab1234C5D6"
+
+ # Uncomment to send a custom payload during DPS registration
+ payload = { uri = "file:///home/aziot/payload.json" }
+
+```
+
+The payload file (in this case `/home/aziot/payload.json`) can contain any valid JSON, such as:
++
+```json
+{
+ "modelId": "dtmi:com:example:edgedevice;1"
+}
+```
+ ## Next steps
-* To learn how to provision devices using a custom allocation policy, see [How to use custom allocation policies](./how-to-use-custom-allocation-policies.md)
+* To learn how to provision devices using a custom allocation policy, see [How to use custom allocation policies](./how-to-use-custom-allocation-policies.md)
iot-edge Configure Connect Verify Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/configure-connect-verify-gpu.md
To create a GPU-optimized virtual machine (VM), choosing the right size is impor
Let's create an IoT Edge VM with the [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview) template in GitHub, then configure it to be GPU-optimized.
-1. Go to the IoT Edge VM deployment template in GitHub: [Azure/iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.3).
+1. Go to the IoT Edge VM deployment template in GitHub: [Azure/iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4).
1. Select the **Deploy to Azure** button, which initiates the creation of a custom VM for you in the Azure portal.
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
# Configure an IoT Edge device to communicate through a proxy server IoT Edge devices send HTTPS requests to communicate with IoT Hub. If your device is connected to a network that uses a proxy server, you need to configure the IoT Edge runtime to communicate through the server. Proxy servers can also affect individual IoT Edge modules if they make HTTP or HTTPS requests that aren't routed through the IoT Edge hub.
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
# Connect a downstream device to an Azure IoT Edge gateway This article provides instructions for establishing a trusted connection between downstream devices and IoT Edge transparent gateways. In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to IoT Hub.
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
You should already have IoT Edge installed on your device. If not, follow the st
pk = "file:///var/secrets/iot-edge-device-ca-gateway.key.pem" ```
-01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.3. For example:
+01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.4. For example:
```toml [agent.config]
- image: "mcr.microsoft.com/azureiotedge-agent:1.3"
+ image: "mcr.microsoft.com/azureiotedge-agent:1.4"
``` 01. The beginning of your parent configuration file should look similar to the following example.
To verify the *hostname*, you need to inspect the environment variables of the *
```output NAME STATUS DESCRIPTION CONFIG SimulatedTemperatureSensor running Up 5 seconds mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0
- edgeAgent running Up 17 seconds mcr.microsoft.com/azureiotedge-agent:1.3
- edgeHub running Up 6 seconds mcr.microsoft.com/azureiotedge-hub:1.3
+ edgeAgent running Up 17 seconds mcr.microsoft.com/azureiotedge-agent:1.4
+ edgeHub running Up 6 seconds mcr.microsoft.com/azureiotedge-hub:1.4
``` 01. Inspect the *edgeHub* container.
You should already have IoT Edge installed on your device. If not, follow the st
pk = "file:///var/secrets/iot-edge-device-ca-downstream.key.pem" ```
-01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.3. For example:
+01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.4. For example:
```toml [agent.config]
- image: "mcr.microsoft.com/azureiotedge-agent:1.3"
+ image: "mcr.microsoft.com/azureiotedge-agent:1.4"
``` 01. The beginning of your child configuration file should look similar to the following example.
The API proxy module was designed to be customized to handle most common gateway
"systemModules": { "edgeAgent": { "settings": {
- "image": "mcr.microsoft.com/azureiotedge-agent:1.3",
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.4",
"createOptions": "{}" }, "type": "docker" }, "edgeHub": { "settings": {
- "image": "mcr.microsoft.com/azureiotedge-hub:1.3",
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.4",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}]}}}" }, "type": "docker",
name = "edgeAgent"
type = "docker" [agent.config]
-image: "{Parent FQDN or IP}:443/azureiotedge-agent:1.3"
+image: "{Parent FQDN or IP}:443/azureiotedge-agent:1.4"
``` If you are using a local container registry, or providing the container images manually on the device, update the config file accordingly.
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
# Create an IoT Edge device This article provides an overview of the options available to you for installing and provisioning IoT Edge on your devices.
If you want more information about how to choose the right option for you, conti
:::moniker range=">=iotedge-2020-11" >[!NOTE]
->The following table reflects the supported scenarios for IoT Edge version 1.3. To see content about Windows containers, switch to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+>The following table reflects the supported scenarios for IoT Edge version 1.4. To see content about Windows containers, switch to the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
| | Linux containers on Linux hosts | |--| -- |
For Windows devices, the IoT Edge runtime is installed directly on the host devi
<!-- iotedge-2020-11 --> :::moniker range=">=iotedge-2020-11"
-IoT Edge version 1.3 doesn't support Windows containers. Windows containers are not supported beyond version 1.1. To learn more about IoT Edge with Windows containers, see the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
+IoT Edge version 1.4 doesn't support Windows containers. Windows containers are not supported beyond version 1.1. To learn more about IoT Edge with Windows containers, see the [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true) version of this article.
:::moniker-end <!--end iotedge-2020-11-->
iot-edge How To Create Test Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-test-certificates.md
# Create demo certificates to test IoT Edge device features IoT Edge devices require certificates for secure communication between the runtime, the modules, and any downstream devices. If you don't have a certificate authority to create the required certificates, you can use demo certificates to try out IoT Edge features in your test environment.
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
# Configure an IoT Edge device to act as a transparent gateway This article provides detailed instructions for configuring an IoT Edge device to function as a transparent gateway for other devices to communicate with IoT Hub. This article uses the term *IoT Edge gateway* to refer to an IoT Edge device configured as a transparent gateway. For more information, see [How an IoT Edge device can be used as a gateway](./iot-edge-as-gateway.md).
Now, you need to copy the certificates to the Azure IoT Edge for Linux on Window
Copy-EflowVMFile -fromFile <path>\certs\azure-iot-test-only.root.ca.cert.pem -toFile /home/iotedge-user/certs/certs/azure-iot-test-only.root.ca.cert.pem -pushFile ```
-1. Invoke the following commands on the EFLOW VM to grant iotedge permissions to the certificate files since `Copy-EflowVMFile` copies files with root only access permissions.
+1. Invoke the following commands on the EFLOW VM to grant *iotedge* permissions to the certificate files since `Copy-EflowVMFile` copies files with root only access permissions.
```powershell Invoke-EflowVmCommand "sudo chown -R iotedge /home/iotedge-user/certs/"
iot-edge How To Install Iot Edge Ubuntuvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm.md
# Run Azure IoT Edge on Ubuntu Virtual Machines The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The runtime can be deployed on devices as small as a Raspberry Pi or as large as an industrial server. Once a device is configured with the IoT Edge runtime, you can start deploying business logic to it from the cloud.
This article lists the steps to deploy an Ubuntu 18.04 LTS virtual machine with
On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.1/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session. :::moniker-end :::moniker range=">=iotedge-2020-11"
-This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.3) project repository.
+This article lists the steps to deploy an Ubuntu 20.04 LTS virtual machine with the Azure IoT Edge runtime installed and configured using a pre-supplied device connection string. The deployment is accomplished using a [cloud-init](../virtual-machines/linux/using-cloud-init.md) based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) maintained in the [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) project repository.
-On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.3/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
+On first boot, the virtual machine [installs the latest version of the Azure IoT Edge runtime via cloud-init](https://github.com/Azure/iotedge-vm-deploy/blob/1.4/cloud-init.txt). It also sets a supplied connection string before the runtime starts, allowing you to easily configure and connect the IoT Edge device without the need to start an SSH or remote desktop session.
:::moniker-end ## Deploy using Deploy to Azure Button
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
[![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.1%2FedgeDeploy.json) :::moniker-end :::moniker range=">=iotedge-2020-11"
- [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.3%2FedgeDeploy.json)
+ [![Deploy to Azure Button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2F1.4%2FedgeDeploy.json)
:::moniker-end 1. On the newly launched window, fill in the available form fields:
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
```azurecli-interactive az deployment group create \ --resource-group IoTEdgeResources \
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.3/edgeDeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \ --parameters adminUsername='<REPLACE_WITH_USERNAME>' \ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
The [Deploy to Azure Button](../azure-resource-manager/templates/deploy-to-azure
#Create a VM using the iotedge-vm-deploy script az deployment group create \ --resource-group IoTEdgeResources \
- --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.3/edgeDeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
--parameters dnsLabelPrefix='my-edge-vm1' \ --parameters adminUsername='<REPLACE_WITH_USERNAME>' \ --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id <REPLACE_WITH_DEVICE-NAME> --hub-name <REPLACE-WITH-HUB-NAME> -o tsv) \
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
# Manage certificates on an IoT Edge device All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too.
iot-edge How To Provision Devices At Scale Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
# Create and provision IoT Edge for Linux on Windows devices at scale using symmetric keys This article provides end-to-end instructions for autoprovisioning one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using symmetric keys. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
iot-edge How To Provision Devices At Scale Linux On Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-tpm.md
# Create and provision an IoT Edge for Linux on Windows device at scale by using a TPM This article provides instructions for autoprovisioning an Azure IoT Edge for Linux on Windows device by using a Trusted Platform Module (TPM). You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before you continue.
Simulated TPM samples:
Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" ```
- If you have enrolled the device using a custom **Registration Id**, you must specify that Registration Id as well when provisioning:
+ If you have enrolled the device using a custom **Registration Id**, you must specify that registration ID as well when provisioning:
```powershell Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" -registrationId "REGISTRATION_ID_HERE"
Simulated TPM samples:
Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" ```
- If you have enrolled the device using a custom **Registration Id**, you must specify that Registration Id as well when provisioning:
+ If you have enrolled the device using a custom **Registration Id**, you must specify that registration ID as well when provisioning:
```powershell Provision-EflowVM -provisioningType "DpsTpm" -scopeId "SCOPE_ID_HERE" -registrationId "REGISTRATION_ID_HERE"
iot-edge How To Provision Devices At Scale Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-x509.md
# Create and provision IoT Edge for Linux on Windows devices at scale using X.509 certificates This article provides end-to-end instructions for autoprovisioning one or more [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) devices using X.509 certificates. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Title: Create and provision IoT Edge devices using symmetric keys on Linux - Azu
description: Use symmetric key attestation to test provisioning Linux devices at scale for Azure IoT Edge with device provisioning service Previously updated : 05/12/2022 Last updated : 08/26/2022 # Create and provision IoT Edge devices at scale on Linux using symmetric key This article provides end-to-end instructions for autoprovisioning one or more Linux IoT Edge devices using symmetric keys. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
Have the following information ready:
source = "dps" global_endpoint = "https://global.azure-devices-provisioning.net" id_scope = "PASTE_YOUR_SCOPE_ID_HERE"+
+ # Uncomment to send a custom payload during DPS registration
+ # payload = { uri = "PATH_TO_JSON_FILE" }
[provisioning.attestation] method = "symmetric_key"
Have the following information ready:
If you use any PKCS#11 URIs, find the **PKCS#11** section in the config file and provide information about your PKCS#11 configuration.
-1. Optionally, find the auto reprovisioning mode section of the file. Use the `auto_reprovisioning_mode` parameter to configure your device's reprovisioning behavior. **Dynamic** - Reprovision when the device detects that it may have been moved from one IoT Hub to another. This is the default. **AlwaysOnStartup** - Reprovision when the device is rebooted or a crash causes the daemon(s) to restart. **OnErrorOnly** - Never trigger device reprovisioning automatically. Each mode has an implicit device reprovisioning fallback if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
+Optionally, find the auto reprovisioning mode section of the file. Use the `auto_reprovisioning_mode` parameter to configure your device's reprovisioning behavior. **Dynamic** - Reprovision when the device detects that it may have been moved from one IoT Hub to another. This is the default. **AlwaysOnStartup** - Reprovision when the device is rebooted or a crash causes the daemon(s) to restart. **OnErrorOnly** - Never trigger device reprovisioning automatically. Each mode has an implicit device reprovisioning fallback if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).
-1. Save and close the config.toml file.
-1. Apply the configuration changes that you made to IoT Edge.
+<!-- iotedge-1.4 -->
- ```bash
- sudo iotedge config apply
- ```
+Optionally, uncomment the `payload` parameter to specify the path to a local JSON file. The contents of the file will be [sent to DPS as additional data](../iot-dps/how-to-send-additional-data.md#iot-edge-support) when the device registers. This is useful for [custom allocation](../iot-dps/how-to-use-custom-allocation-policies.md), for example, allocating your devices based on an IoT Plug and Play model ID without human intervention.
++
+<!-- iotedge-2020-11 -->
+
+Save and close the file.
+
+Apply the configuration changes that you made to IoT Edge.
+
+```bash
+sudo iotedge config apply
+```
:::moniker-end <!-- end iotedge-2020-11 -->
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
# Create and provision IoT Edge devices at scale with a TPM on Linux This article provides instructions for autoprovisioning an Azure IoT Edge for Linux device by using a Trusted Platform Module (TPM). You can automatically provision IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before you continue.
After the runtime is installed on your device, configure the device with the inf
source = "dps" global_endpoint = "https://global.azure-devices-provisioning.net" id_scope = "SCOPE_ID_HERE"+
+ # Uncomment to send a custom payload during DPS registration
+ # payload = { uri = "PATH_TO_JSON_FILE" }
[provisioning.attestation] method = "tpm"
After the runtime is installed on your device, configure the device with the inf
1. Update the values of `id_scope` and `registration_id` with your device provisioning service and device information. The `scope_id` value is the **ID Scope** from your device provisioning service instance's overview page. 1. Optionally, find the auto reprovisioning mode section of the file. Use the `auto_reprovisioning_mode` parameter to configure your device's reprovisioning behavior. **Dynamic** - Reprovision when the device detects that it may have been moved from one IoT Hub to another. This is the default. **AlwaysOnStartup** - Reprovision when the device is rebooted or a crash causes the daemon(s) to restart. **OnErrorOnly** - Never trigger device reprovisioning automatically. Each mode has an implicit device reprovisioning fallback if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md).+
+<!-- iotedge-1.4 -->
+1. Optionally, uncomment the `payload` parameter to specify the path to a local JSON file. The contents of the file will be [sent to DPS as additional data](../iot-dps/how-to-send-additional-data.md#iot-edge-support) when the device registers. This is useful for [custom allocation](../iot-dps/how-to-use-custom-allocation-policies.md), for example, allocating your devices based on an IoT Plug and Play model ID without human intervention.
+<!-- iotedge-2020-11 -->
1. Save and close the file. :::moniker-end
iot-edge How To Provision Devices At Scale Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-x509.md
# Create and provision IoT Edge devices at scale on Linux using X.509 certificates This article provides end-to-end instructions for autoprovisioning one or more Linux IoT Edge devices using X.509 certificates. You can automatically provision Azure IoT Edge devices with the [Azure IoT Hub device provisioning service](../iot-dps/index.yml) (DPS). If you're unfamiliar with the process of autoprovisioning, review the [provisioning overview](../iot-dps/about-iot-dps.md#provisioning-process) before continuing.
Have the following information ready:
source = "dps" global_endpoint = "https://global.azure-devices-provisioning.net" id_scope = "SCOPE_ID_HERE"+
+ # Uncomment to send a custom payload during DPS registration
+ # payload = { uri = "PATH_TO_JSON_FILE" }
[provisioning.attestation] method = "x509"
Have the following information ready:
1. Optionally, find the auto reprovisioning mode section of the file. Use the `auto_reprovisioning_mode` parameter to configure your device's reprovisioning behavior. **Dynamic** - Reprovision when the device detects that it may have been moved from one IoT Hub to another. This is the default. **AlwaysOnStartup** - Reprovision when the device is rebooted or a crash causes the daemon(s) to restart. **OnErrorOnly** - Never trigger device reprovisioning automatically. Each mode has an implicit device reprovisioning fallback if the device is unable to connect to IoT Hub during identity provisioning due to connectivity errors. For more information, see [IoT Hub device reprovisioning concepts](../iot-dps/concepts-device-reprovision.md). +
+<!-- iotedge-1.4 -->
+1. Optionally, uncomment the `payload` parameter to specify the path to a local JSON file. The contents of the file will be [sent to DPS as additional data](../iot-dps/how-to-send-additional-data.md#iot-edge-support) when the device registers. This is useful for [custom allocation](../iot-dps/how-to-use-custom-allocation-policies.md), for example, allocating your devices based on an IoT Plug and Play model ID without human intervention.
+
+<!-- iotedge-2020-11 -->
1. Save and close the file. 1. Apply the configuration changes that you made to IoT Edge.
iot-edge How To Provision Single Device Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-symmetric.md
# Create and provision an IoT Edge for Linux on Windows device using symmetric keys This article provides end-to-end instructions for registering and provisioning an IoT Edge for Linux on Windows device.
iot-edge How To Provision Single Device Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-x509.md
# Create and provision an IoT Edge for Linux on Windows device using X.509 certificates This article provides end-to-end instructions for registering and provisioning an IoT Edge for Linux on Windows device.
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
# Create and provision an IoT Edge device on Linux using symmetric keys This article provides end-to-end instructions for registering and provisioning a Linux IoT Edge device, including installing IoT Edge.
iot-edge How To Provision Single Device Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-x509.md
# Create and provision an IoT Edge device on Linux using X.509 certificates This article provides end-to-end instructions for registering and provisioning a Linux IoT Edge device, including installing IoT Edge.
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
# Update IoT Edge As the IoT Edge service releases new versions, you'll want to update your IoT Edge devices for the latest features and security improvements. This article provides information about how to update your IoT Edge devices when a new version is available.
For information about IoT Edge for Linux on Windows updates, see [EFLOW Updates]
:::moniker range=">=iotedge-2020-11" >[!NOTE]
->Currently, there is not support for IoT Edge version 1.3 running on Windows devices.
+>Currently, there is no support for IoT Edge version 1.4 running on Windows devices.
> >To view the steps for updating IoT Edge for Linux on Windows, see [IoT Edge 1.1](?view=iotedge-2018-06&preserve-view=true&tabs=windows).
Some of the key differences between the latest release and version 1.1 and earli
* The import command cannot detect or modify access rules to a device's trusted platform module (TPM). If your device uses TPM attestation, you need to manually update the /etc/udev/rules.d/tpmaccess.rules file to give access to the aziottpm service. For more information, see [Give IoT Edge access to the TPM](how-to-auto-provision-simulated-device-linux.md?view=iotedge-2020-11&preserve-view=true#give-iot-edge-access-to-the-tpm). * The workload API in the latest version saves encrypted secrets in a new format. If you upgrade from an older version to latest version, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in the latest version are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary. * For backward compatibility when connecting devices that do not support TLS 1.2, you can configure Edge Hub to still accept TLS 1.0 or 1.1 via the [SslProtocols environment variable](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md#edgehub).  Please note that support for [TLS 1.0 and 1.1 in IoT Hub is considered legacy](../iot-hub/iot-hub-tls-support.md) and may also be removed from Edge Hub in future releases.  To avoid future issues, use TLS 1.2 as the only TLS version when connecting to Edge Hub or IoT Hub.
-* The preview for the experimental MQTT broker in Edge Hub 1.2 has ended and is not included in Edge Hub 1.3. We are continuing to refine our plans for an MQTT broker based on feedback received. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like Mosquitto as an IoT Edge module.
+* The preview for the experimental MQTT broker in Edge Hub 1.2 has ended and is not included in Edge Hub 1.4. We are continuing to refine our plans for an MQTT broker based on feedback received. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like Mosquitto as an IoT Edge module.
* Starting with version 1.2, when a backing image is removed from a container, the container keeps running and it persists across restarts. In 1.1, when a backing image is removed, the container is immediately recreated and the backing image is updated. Before automating any update processes, validate that it works on test machines.
If you're using Windows containers or IoT Edge for Linux on Windows, this specia
# [Windows](#tab/windows)
-Currently, there is no support for IoT Edge version 1.3 running on Windows devices.
+Currently, there is no support for IoT Edge version 1.4 running on Windows devices.
iot-edge Iot Edge As Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-as-gateway.md
# How an IoT Edge device can be used as a gateway IoT Edge devices can operate as gateways, providing a connection between other devices on the network and IoT Hub.
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
# Understand how Azure IoT Edge uses certificates IoT Edge certificates are used by the modules and downstream IoT devices to verify the identity and legitimacy of the [IoT Edge hub](iot-edge-runtime.md#iot-edge-hub) runtime module. These verifications enable a TLS (transport layer security) secure connection between the runtime, the modules, and the IoT devices. Like IoT Hub itself, IoT Edge requires a secure and encrypted connection from IoT downstream (or leaf) devices and IoT Edge modules. To establish a secure TLS connection, the IoT Edge hub module presents a server certificate chain to connecting clients in order for them to verify its identity.
iot-edge Iot Edge For Linux On Windows Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-benefits.md
# Why use Azure IoT Edge for Linux on Windows? For organizations interested in running business logic and analytics on devices, Azure IoT Edge for Linux on Windows (EFLOW) enables the deployment of production Linux-based cloud-native workloads onto Windows devices. Connecting your devices to Microsoft Azure lets you quickly bring cloud intelligence to your business. At the same time, running workloads on devices allows you to respond quickly in instances with limited connectivity and reduce bandwidth costs.
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
# IoT Edge for Linux on Windows security Azure IoT Edge for Linux on Windows benefits from all the security offerings from running on a Windows Client/Server host and ensures all the extra components keep the same security premises. This article provides information about the different security premises that are enabled by default, and some of the optional premises the user may enable.
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
# Azure IoT Edge for Linux on Windows supported systems This article provides details about which systems are supported by IoT Edge for Linux on Windows, whether generally available or in preview.
iot-edge Iot Edge For Linux On Windows Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-updates.md
# Update IoT Edge for Linux on Windows As the IoT Edge for Linux on Windows (EFLOW) application releases new versions, you'll want to update your IoT Edge devices for the latest features and security improvements. This article provides information about how to update your IoT Edge for Linux on Windows devices when a new version is available.
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
# What is Azure IoT Edge for Linux on Windows Azure IoT Edge for Linux on Windows (EFLOW) allows you to run containerized Linux workloads alongside Windows applications in Windows deployments. Businesses that rely on Windows to power their edge devices and solutions can now take advantage of the cloud-native analytics solutions being built in Linux.
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
# Understand Azure IoT Edge limits and restrictions This article explains the limits and restrictions when using IoT Edge.
iot-edge Iot Edge Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-modules.md
Azure IoT Edge lets you deploy and manage business logic on the edge in the form
* A **module image** is a package containing the software that defines a module. * A **module instance** is the specific unit of computation running the module image on an IoT Edge device. The module instance is started by the IoT Edge runtime.
-* A **module identity** is a piece of information (including security credentials) stored in IoT Hub, that is associated to each module instance.
-* A **module twin** is a JSON document stored in IoT Hub, that contains state information for a module instance, including metadata, configurations, and conditions.
+* A **module identity** is a piece of information (including security credentials) stored in IoT Hub that is associated with each module instance.
+* A **module twin** is a JSON document stored in IoT Hub that contains state information for a module instance, including metadata, configurations, and conditions.
## Module images and instances
await client.OpenAsync();
// Get the module twin
Twin twin = await client.GetTwinAsync();
```
## Offline capabilities
Azure IoT Edge modules can operate offline indefinitely after syncing with IoT Hub at least once. IoT Edge devices can also extend this offline capability to other IoT devices. For more information, see [Understand extended offline capabilities for IoT Edge devices, modules, and child devices](offline-capabilities.md).
iot-edge Iot Edge Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-runtime.md
# Understand the Azure IoT Edge runtime and its architecture The IoT Edge runtime is a collection of programs that turn a device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the edge and communicate the results.
iot-edge Iot Edge Security Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-security-manager.md
# Azure IoT Edge security manager The Azure IoT Edge security manager is a well-bounded security core for protecting the IoT Edge device and all its components by abstracting the secure silicon hardware. The security manager is the focal point for security hardening and provides technology integration point to original equipment manufacturers (OEM).
iot-edge Module Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-development.md
# Develop your own IoT Edge modules Azure IoT Edge modules can connect with other Azure services and contribute to your larger cloud data pipeline. This article describes how you can develop modules to communicate with the IoT Edge runtime and IoT Hub, and therefore the rest of the Azure cloud.
iot-edge Offline Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/offline-capabilities.md
# Understand extended offline capabilities for IoT Edge devices, modules, and child devices Azure IoT Edge supports extended offline operations on your IoT Edge devices, and enables offline operations on child devices too. As long as an IoT Edge device has had one opportunity to connect to IoT Hub, that device and any child devices can continue to function with intermittent or no internet connection.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
# Prepare to deploy your IoT Edge solution in production When you're ready to take your IoT Edge solution from development into production, make sure that it's configured for ongoing performance.
Once your IoT Edge device connects, be sure to continue configuring the Upstream
 * Reduce memory space used by the IoT Edge hub
 * Use correct module images in deployment manifests
 * Be mindful of twin size limits when using custom modules
+ * Configure how updates to modules are applied
### Be consistent with upstream protocol
If you deploy a large number of modules, you might exhaust this twin size limit.
- Store any configuration in the custom module twin, which has its own limit. - Store some configuration that points to a non-space-limited location (that is, to a blob store).
-## Container management
+### Configure how updates to modules are applied
+When a deployment is updated, Edge Agent receives the new configuration as a twin update. If the new configuration has new or updated module images, by default, Edge Agent sequentially processes each module:
+1. The updated image is downloaded
+1. The running module is stopped
+1. A new module instance is started
+1. The next module update is processed
+
+In some cases, for example when dependencies exist between modules, it may be desirable to first download all updated module images before restarting any running modules. This module update behavior can be configured by setting an IoT Edge Agent environment variable `ModuleUpdateMode` to string value `WaitForAllPulls`. For more information, see [IoT Edge Environment Variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md).
+
+```JSON
+"modulesContent": {
+ "$edgeAgent": {
+ "properties.desired": {
+ ...
+ "systemModules": {
+ "edgeAgent": {
+ "env": {
+ "ModuleUpdateMode": {
+ "value": "WaitForAllPulls"
+ }
+ ...
+```
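A deployment manifest containing this setting could then be applied to a device with, for example, the Azure CLI IoT extension (the manifest file name here is illustrative):

```azurecli-interactive
az iot edge set-modules --hub-name <hub-name> --device-id <device-id> \
    --content ./deployment.modules.json
```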
+### Container management
* **Important**
 * Use tags to manage versions
 * Manage volumes
* **Helpful**
 * Store runtime containers in your private registry
+ * Configure image garbage collection
### Use tags to manage versions
Next, be sure to update the image references in the deployment.template.json fil
`"image": "<registry name and server>/azureiotedge-hub:1.1",`
+### Configure image garbage collection
+Image garbage collection is a feature in IoT Edge v1.4 and later to automatically clean up Docker images that are no longer used by IoT Edge modules. It only deletes Docker images that were pulled by the IoT Edge runtime as part of a deployment. Deleting unused Docker images helps conserve disk space.
+
+The feature is implemented in IoT Edge's host component, the `aziot-edged` service and enabled by default. Cleanup is done every day at midnight (device local time) and removes unused Docker images that were last used seven days ago. The parameters to control cleanup behavior are set in the `config.toml` and explained later in this section. If parameters aren't specified in the configuration file, the default values are applied.
+
+For example, the following is the `config.toml` image garbage collection section using default values:
+
+```toml
+[image_garbage_collection]
+enabled = true
+cleanup_recurrence = "1d"
+image_age_cleanup_threshold = "7d"
+cleanup_time = "00:00"
+```
+The following table describes image garbage collection parameters. All parameters are **optional** and can be set individually to change the default settings.
+
+| Parameter | Description | Required | Default value |
+|--|--|--|--|
+| `enabled` | Enables the image garbage collection. You may choose to disable the feature by changing this setting to `false`. | Optional | true |
+| `cleanup_recurrence` | Controls the recurrence frequency of the cleanup task. Must be specified as a multiple of days and can't be less than one day. <br><br> For example: 1d, 2d, 6d, etc. | Optional | 1d |
+| `image_age_cleanup_threshold` | Defines the minimum age threshold of unused images before considering for cleanup and must be specified in days. You can specify as *0d* to clean up the images as soon as they're removed from the deployment. <br><br> Images are considered unused *after* they've been removed from the deployment. | Optional | 7d |
+| `cleanup_time` | Time of day, *in device local time*, when the cleanup task runs. Must be in 24-hour HH:MM format. | Optional | 00:00 |
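For example, a hypothetical `config.toml` section that runs cleanup every two days at 02:30 device local time and removes images as soon as they leave the deployment might look like the following sketch (the values are chosen for illustration only and follow the parameter rules in the table above):

```toml
# Illustrative, non-default garbage collection settings.
[image_garbage_collection]
enabled = true
cleanup_recurrence = "2d"           # must be a multiple of days, at least 1d
image_age_cleanup_threshold = "0d"  # clean up images as soon as they're unused
cleanup_time = "02:30"              # 24-hour HH:MM, device local time
```

After editing `/etc/aziot/config.toml` on the device, run `sudo iotedge config apply` so the `aziot-edged` service picks up the new settings.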
+
## Networking
* **Helpful**
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
# Quickstart: Deploy your first IoT Edge module to a virtual Linux device Test out Azure IoT Edge in this quickstart by deploying containerized code to a virtual Linux IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using an Azure virtual machine for your IoT Edge device, which allows you to quickly create a test machine and then delete it when you're finished.
Use the following CLI command to create your IoT Edge device based on the prebui
<!-- iotedge-2020-11 --> :::moniker range=">=iotedge-2020-11"
-Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.3) template.
+Use the following CLI command to create your IoT Edge device based on the prebuilt [iotedge-vm-deploy](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) template.
* For bash or Cloud Shell users, copy the following command into a text editor, replace the placeholder text with your information, then copy into your bash or Cloud Shell window:

   ```azurecli-interactive
   az deployment group create \
   --resource-group IoTEdgeResources \
-  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.3/edgeDeploy.json" \
+  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
   --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' \
   --parameters adminUsername='azureUser' \
   --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) \
   ```
Use the following CLI command to create your IoT Edge device based on the prebui
   ```azurecli
   az deployment group create `
   --resource-group IoTEdgeResources `
-  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.3/edgeDeploy.json" `
+  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" `
   --parameters dnsLabelPrefix='<REPLACE_WITH_VM_NAME>' `
   --parameters adminUsername='azureUser' `
   --parameters deviceConnectionString=$(az iot hub device-identity connection-string show --device-id myEdgeDevice --hub-name <REPLACE_WITH_HUB_NAME> -o tsv) `
   ```
In **IoT Edge Module Marketplace**, search for and select the `Simulated Tempera
Select **Runtime Settings** to open the settings for the edgeHub and edgeAgent modules. This settings section is where you can manage the runtime modules by adding environment variables or changing the create options.
-Update the **Image** field for both the edgeHub and edgeAgent modules to use the version tag 1.3. For example:
+Update the **Image** field for both the edgeHub and edgeAgent modules to use the version tag 1.4. For example:
-* `mcr.microsoft.com/azureiotedge-hub:1.3`
-* `mcr.microsoft.com/azureiotedge-agent:1.3`
+* `mcr.microsoft.com/azureiotedge-hub:1.4`
+* `mcr.microsoft.com/azureiotedge-agent:1.4`
Select **Save** to apply your changes to the runtime modules.
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
# Quickstart: Deploy your first IoT Edge module to a Windows device Try out Azure IoT Edge in this quickstart by deploying containerized code to a Linux on Windows IoT Edge device. IoT Edge allows you to remotely manage code on your devices so that you can send more of your workloads to the edge. For this quickstart, we recommend using your own Windows Client device to see how easy it is to use Azure IoT Edge for Linux on Windows. If you wish to use Windows Server or an Azure VM to create your deployment, follow the steps in the how-to guide on [installing and provisioning Azure IoT Edge for Linux on a Windows device](how-to-provision-single-device-linux-on-windows-symmetric.md).
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
# PowerShell functions for IoT Edge for Linux on Windows Understand the PowerShell functions that deploy, provision, and get the status of your IoT Edge for Linux on Windows (EFLOW) virtual machine.
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
# Azure IoT Edge supported systems This article provides details about which systems and components are supported by IoT Edge, whether officially or in preview.
The following table lists the components included in each release starting with
| Release | aziot-edge | edgeHub<br>edgeAgent | aziot-identity-service |
| - | - | -- | - |
+| **1.4** | 1.4.0 | 1.4.0 | 1.4.0 |
| **1.3** | 1.3.0 | 1.3.0 | 1.3.0 |
| **1.2** | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br><br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br>1.2.6<br>1.2.7 | 1.2.0<br>1.2.1<br>1.2.3<br>1.2.4<br>1.2.5<br> |
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
# Common issues and resolutions for Azure IoT Edge Use this article to find steps to resolve common issues that you may experience when deploying IoT Edge solutions. If you need to learn how to find logs and errors from your IoT Edge device, see [Troubleshoot your IoT Edge device](troubleshoot.md).
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
# Troubleshoot your IoT Edge device If you experience issues running Azure IoT Edge in your environment, use this article as a guide for troubleshooting and diagnostics.
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
This tutorial walks you through hosting a test EST server and configuring an IoT
## Prerequisites

* An existing IoT Edge device with the [latest Azure IoT Edge runtime](how-to-update-iot-edge.md) installed. If you need to create a test device, complete [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](quickstart-linux.md).
-* Your IoT Edge device requires Azure IoT Edge runtime 1.2 or later for EST support. Azure IoT Edge runtime 1.3 is required for EST certificate renewal.
+* Your IoT Edge device requires Azure IoT Edge runtime 1.2 or later for EST support. Azure IoT Edge runtime 1.3 or later is required for EST certificate renewal.
* IoT Hub Device Provisioning Service (DPS) linked to IoT Hub. For information on configuring DPS, see [Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal](../iot-dps/quick-setup-auto-provision.md).

## What is Enrollment over Secure Transport?
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Once your new solution loads in the Visual Studio Code window, take a moment to
### Set IoT Edge runtime version
-The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is 1.3. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
+The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is 1.4. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
1. Select **View** > **Command Palette**.
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
To create a hierarchy of IoT Edge devices, you will need:
   ```azurecli
   az deployment group create \
   --resource-group <REPLACE_WITH_YOUR_RESOURCE_GROUP> \
-  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.3/edgeDeploy.json" \
+  --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
   --parameters dnsLabelPrefix='<REPLACE_WITH_UNIQUE_DNS_FOR_VIRTUAL_MACHINE>' \
   --parameters adminUsername='azureuser' \
   --parameters authenticationType='sshPublicKey' \
   ```
To create a hierarchy of IoT Edge devices, you will need:
The virtual machine uses SSH keys for authenticating users. If you are unfamiliar with creating and using SSH keys, you can follow [the instructions for SSH public-private key pairs for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
- IoT Edge version 1.3 is preinstalled with this ARM template, saving the need to manually install the assets on the virtual machines. If you are installing IoT Edge on your own devices, see [Install Azure IoT Edge for Linux](how-to-provision-single-device-linux-symmetric.md) or [Update IoT Edge](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).
+ IoT Edge version 1.4 is preinstalled with this ARM template, saving the need to manually install the assets on the virtual machines. If you are installing IoT Edge on your own devices, see [Install Azure IoT Edge for Linux](how-to-provision-single-device-linux-symmetric.md) or [Update IoT Edge](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).
A successful creation of a virtual machine using this ARM template will output your virtual machine's `SSH` handle and fully-qualified domain name (`FQDN`). You will use the SSH handle and either the FQDN or IP address of each virtual machine for configuration in later steps, so keep track of this information. A sample output is pictured below.
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Title: IoT Edge version navigation and history - Azure IoT Edge
description: Discover what's new in IoT Edge with information about new features and capabilities in the latest releases. Previously updated : 06/13/2022 Last updated : 08/25/2022
Azure IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/p
The IoT Edge documentation on this site is available for two different versions of the product, so that you can choose the content that applies to your IoT Edge environment. Currently, the two supported versions are:
-* **IoT Edge 1.3** contains content for new features and capabilities that are in the latest stable release. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version.
+* **IoT Edge 1.4 (LTS)** is the latest long-term support (LTS) version of IoT Edge and contains content for new features and capabilities that are in the latest stable release. The documentation for this version covers all features and capabilities from all previous versions through 1.3. This version of the documentation also contains content for the IoT Edge for Linux on Windows (EFLOW) continuous release version.
* **IoT Edge 1.1 (LTS)** is the first long-term support (LTS) version of IoT Edge. The documentation for this version covers all features and capabilities from all previous versions through 1.1. This version of the documentation also contains content for the IoT Edge for Linux on Windows long-term support version, which is based on IoT Edge 1.1 LTS.
  * This documentation version will be stable through the supported lifetime of version 1.1, and won't reflect new features released in later versions. IoT Edge 1.1 LTS will be supported until December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Date | Highlights |
| - | - | - | - |
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Stable | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack, which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](https://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | [Long-term support plan and supported systems updates](support.md) |
This table provides recent version history for IoT Edge package releases, and hi
### IoT Edge for Linux on Windows
-| Release notes and assets | Type | Date | Highlights |
-| | - | - | - |
-| [Continuous Release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | Stable | January 2022 | **Public Preview** |
-| [1.1](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | Long-term support (LTS) | June 2021 | [Long-term support plan and supported systems updates](support.md) |
+
+| IoT Edge release | Available in EFLOW branch | Release date | Highlights |
+|--|--|--|--|
+| 1.4 | Continuous release (CR) <br> Long-term support (LTS) | TBA | |
+| 1.3 | Continuous release (CR) | TBA | |
+| 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | **Public Preview** |
+| 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | [Long-term support plan and supported systems updates](support.md) |
## Next steps
iot-hub Iot Hub Devguide File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-file-upload.md
IoT Hub imposes throttling limits on the number of file uploads that it can init
## Associate an Azure storage account with IoT Hub
-You must associate an Azure storage account and s blob container with your IoT hub to use file upload features. All file uploads from devices registered with your IoT hub will go to this container. To configure a storage account and blob container on your IoT hub, see [Configure file uploads with Azure portal](iot-hub-configure-file-upload.md), [Configure file uploads with Azure CLI](iot-hub-configure-file-upload-cli.md), or [Configure file uploads with PowerShell](iot-hub-configure-file-upload-powershell.md). You can also use the IoT Hub management APIs to configure file uploads programmatically.
+You must associate an Azure storage account and blob container with your IoT hub to use file upload features. All file uploads from devices registered with your IoT hub will go to this container. To configure a storage account and blob container on your IoT hub, see [Configure file uploads with Azure portal](iot-hub-configure-file-upload.md), [Configure file uploads with Azure CLI](iot-hub-configure-file-upload-cli.md), or [Configure file uploads with PowerShell](iot-hub-configure-file-upload-powershell.md). You can also use the IoT Hub management APIs to configure file uploads programmatically.
If you use the portal, you can create a storage account and container during configuration. Otherwise, to create a storage account, see [Create a storage account](../storage/common/storage-account-create.md) in the Azure storage documentation. Once you have a storage account, you can see how to create a blob container in the [Azure blob storage quickstarts](../storage/blobs/storage-quickstart-blobs-portal.md). By default, Azure IoT Hub uses key-based authentication to connect and authorize with Azure Storage. You can also configure user-assigned or system-assigned managed identities to authenticate Azure IoT Hub with Azure Storage. Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. To learn how to configure managed identities, see [Configure file upload with managed identities](iot-hub-managed-identity.md#configure-file-upload-with-managed-identities).
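As a sketch of the programmatic route, the Azure CLI can associate an existing storage container with your hub for file uploads. The resource names below are placeholders, and key-based authentication is assumed:

```azurecli
# Associate a blob container with the hub for device file uploads.
az iot hub update --name <hub-name> \
    --fileupload-storage-connectionstring "<storage-account-connection-string>" \
    --fileupload-storage-container-name "<container-name>"
```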
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
To configure alerts:
2. Create new alert rule
- 1. Configure alert condition
+ 1. Configure alert condition (Note: to avoid noisy alerts, we recommend configuring alerts with the **Aggregation type** set to **Average**, a lookback window of 5 minutes, and a threshold of 95%. A sample CLI command follows this list.)
2. (Optional) Add action group for automated repair
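For illustration, the following Azure CLI sketch creates an alert rule with the recommended Average aggregation, 5-minute window, and 95% threshold. The metric name `DipAvailability` (the health probe status metric) and all resource names are assumptions for this example; substitute your own values:

```azurecli
# Alert when the 5-minute average health probe status drops below 95%.
az monitor metrics alert create \
    --name "lb-health-probe-alert" \
    --resource-group <resource-group> \
    --scopes <load-balancer-resource-id> \
    --condition "avg DipAvailability < 95" \
    --window-size 5m \
    --evaluation-frequency 5m
```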
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Previously updated : 03/02/2022 Last updated : 08/28/2022
In this example, you'll create a TCP health probe to monitor port 80.
| Protocol | Select **TCP**. |
| Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 80**. |
| Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
- | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
7. Select **Add**.
- :::image type="content" source="./media/manage-probes-how-to/add-tcp-probe.png" alt-text="Screenshot of TCP probe addition.":::
-
### Remove a TCP health probe
In this example, you'll remove a TCP health probe.
In this example, you'll create an HTTP health probe.
| Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 80**. |
| Path | Enter a URI used for requesting health status. For this example, it's **/**. |
| Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
- | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
7. Select **Add**.
- :::image type="content" source="./media/manage-probes-how-to/add-http-probe.png" alt-text="Screenshot of HTTP probe addition.":::
-
### Remove an HTTP health probe
In this example, you'll remove an HTTP health probe.
In this example, you'll create an HTTPS health probe.
| Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 443**. |
| Path | Enter a URI used for requesting health status. For this example, it's **/**. |
| Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
- | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
7. Select **Add**.
- :::image type="content" source="./media/manage-probes-how-to/add-https-probe.png" alt-text="Screenshot of HTTPS probe addition.":::
-
### Remove an HTTPS health probe
In this example, you'll remove an HTTPS health probe.
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
Title: Create and manage integration accounts
description: Create and manage integration accounts for building B2B enterprise integration workflows in Azure Logic Apps with the Enterprise Integration Pack. ms.suite: integration---+ Last updated 08/23/2022
Last updated 08/23/2022
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-Before you can build business-to-business (B2B) and enterprise integration workflows using Azure Logic Apps, you need to create an *integration account* resource. This account is a scalable cloud-based container in Azure that simplifies how you store and manage B2B artifacts that you define and use in your workflows for B2B scenarios. Such artifacts include [trading partners](logic-apps-enterprise-integration-partners.md), [agreements](logic-apps-enterprise-integration-agreements.md), [maps](logic-apps-enterprise-integration-maps.md), [schemas](logic-apps-enterprise-integration-schemas.md), [certificates](logic-apps-enterprise-integration-certificates.md), and so on. You also need to have an integration account to electronically exchange B2B messages with other organizations. When other organizations use protocols and message formats different from your organization, you have to convert these formats so your organization's system can process those messages. Supported industry-standard protocols include [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), [EDIFACT](logic-apps-enterprise-integration-edifact.md), and [RosettaNet](logic-apps-enterprise-integration-rosettanet.md).
+Before you can build business-to-business (B2B) and enterprise integration workflows using Azure Logic Apps, you need to create an *integration account* resource. This account is a scalable cloud-based container in Azure that simplifies how you store and manage B2B artifacts that you define and use in your workflows for B2B scenarios, for example:
-> [!TIP]
-> To create an integration account for use with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md),
-> review [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment).
+* [Trading partners](logic-apps-enterprise-integration-partners.md)
+* [Agreements](logic-apps-enterprise-integration-agreements.md)
+* [Maps](logic-apps-enterprise-integration-maps.md)
+* [Schemas](logic-apps-enterprise-integration-schemas.md)
+* [Certificates](logic-apps-enterprise-integration-certificates.md)
+
+You also need an integration account to electronically exchange B2B messages with other organizations. When other organizations use protocols and message formats different from your organization, you have to convert these formats so your organization's system can process those messages. With Azure Logic Apps, you can build workflows that support the following industry-standard protocols:
+
+* [AS2](logic-apps-enterprise-integration-as2.md)
+* [EDIFACT](logic-apps-enterprise-integration-edifact.md)
+* [RosettaNet](logic-apps-enterprise-integration-rosettanet.md)
+* [X12](logic-apps-enterprise-integration-x12.md)
This article shows how to complete the following tasks:

* Create an integration account.
-* Link an integration account to a logic app resource.
+* Link your integration account to a logic app resource.
* Change the pricing tier for your integration account.
-* Unlink an integration account from a logic app.
+* Unlink your integration account from a logic app resource.
* Move an integration account to another Azure resource group or subscription.
* Delete an integration account.
-If you're new to Azure Logic Apps, review [What is Azure Logic Apps](logic-apps-overview.md)? For more information about B2B enterprise integration, review [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md).
+> [!NOTE]
+>
+> If you use an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md),
+> and you need to create an integration account to use with that ISE, review
+> [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment).
+
+If you're new to creating B2B enterprise integration workflows in Azure Logic Apps, review the following documentation:
+
+* [What is Azure Logic Apps](logic-apps-overview.md)
+* [B2B enterprise integration workflows with Azure Logic Apps and Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md)
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You have to use the same Azure subscription for both your integration account and logic app resource.
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Make sure that you use the same Azure subscription for both your integration account and logic app resource.
+
+* Whether you're working on a Consumption or Standard logic app workflow, your logic app resource must already exist before you can link your integration account.
+
+ * For Consumption logic app resources, this link is required before you can use the artifacts from your integration account with your workflow. Although you can create your artifacts without this link, the link is required when you're ready to use these artifacts.
+
+ * For Standard logic app resources, this link is optional, based on your scenario:
+
+ * If you have an integration account with the artifacts that you need or want to use, you can link the integration account to each Standard logic app resource where you want to use the artifacts.
+
+ * Some Azure-hosted integration account connectors, such as **AS2**, **EDIFACT**, and **X12**, let you create a connection to your integration account. If you're just using these connectors, you don't need the link.
+
+ * The built-in connectors named **Liquid** and **Flat File** let you select maps and schemas that you previously uploaded to your logic app resource or to a linked integration account.
+
+ If you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option, which also means you don't have to upload maps and schemas to each logic app resource. Either way, you can use these artifacts across all child workflows within the *same logic app resource*.
-* If you're using the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you need to have a logic app resource that you can [link to your integration account](#link-account). This link is required before you can use your artifacts in your workflow. You can create your artifacts without this link, but the link is required when you're ready to use these artifacts in your workflows.
+* Basic knowledge about how to create logic app workflows. For more information, review the following documentation:
-* If you're using the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-type-and-host-environment-differences), you can directly add maps and schemas to your logic app resource using either the Azure portal or Visual Studio Code. You can then use these artifacts across multiple workflows within the *same logic app resource*. You still have to create an integration account for your other B2B artifacts and to use B2B operations, such as [AS2](logic-apps-enterprise-integration-as2.md), [X12](logic-apps-enterprise-integration-x12.md), [EDIFACT](logic-apps-enterprise-integration-edifact.md), and [RosettaNet](logic-apps-enterprise-integration-rosettanet.md) operations. However, you don't need to link your integration account to your logic app resource, so the linking capability doesn't exist.
+ * [Quickstart: Create your first Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md)
+
+ * [Create a Standard logic app workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
## Create integration account
-Integration accounts are available in different tiers that [vary in pricing](https://azure.microsoft.com/pricing/details/logic-apps/). Based on the tier you choose, creating an integration account might incur costs. For more information, review [Logic Apps pricing and billing models](logic-apps-pricing.md#integration-accounts) and [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
+Integration accounts are available in different tiers that [vary in pricing](https://azure.microsoft.com/pricing/details/logic-apps/). Based on the tier you choose, creating an integration account might incur costs. For more information, review [Azure Logic Apps pricing and billing models](logic-apps-pricing.md#integration-accounts) and [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
-Based on your requirements and scenarios, determine the appropriate integration account tier to create. Both your integration account and logic app resource must use the *same* location or Azure region. The following table describes the available tiers:
+Based on your requirements and scenarios, determine the appropriate integration account tier to create. The following table describes the available tiers:
| Tier | Description |
|-|-|
-| **Basic** | For scenarios where you want only message handling or to act as a small business partner that has a trading partner relationship with a larger business entity. <p><p>Supported by the Logic Apps SLA. |
-| **Standard** | For scenarios where you have more complex B2B relationships and increased numbers of entities that you must manage. <p><p>Supported by the Logic Apps SLA. |
-| **Free** | For exploratory scenarios, not production scenarios. This tier has limits on region availability, throughput, and usage. For example, the Free tier is available only for public regions in Azure, for example, West US or Southeast Asia, but not for [Azure China 21Vianet](/azure/chin). <p><p>**Note**: Not supported by the Logic Apps SLA. |
+| **Basic** | For scenarios where you want only message handling or to act as a small business partner that has a trading partner relationship with a larger business entity. <br><br>Supported by the Azure Logic Apps SLA. |
+| **Standard** | For scenarios where you have more complex B2B relationships and increased numbers of entities that you must manage. <br><br>Supported by the Azure Logic Apps SLA. |
+| **Free** | For exploratory scenarios, not production scenarios. This tier has limits on region availability, throughput, and usage. For example, the Free tier is available only for public regions in Azure, for example, West US or Southeast Asia, but not for [Azure China 21Vianet](/azure/chin). <br><br>**Note**: Not supported by the Azure Logic Apps SLA. |
||| For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-resource-create), or [Azure PowerShell](/powershell/module/Az.LogicApp/New-AzIntegrationAccount).
+> [!IMPORTANT]
+>
+> For you to successfully link and use your integration account with your logic app,
+> make sure that both resources exist in the *same* Azure subscription and Azure region.
+ ### [Portal](#tab/azure-portal)
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-1. In the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
+1. In the Azure portal search box, enter **integration accounts**, and select **Integration accounts**.
1. Under **Integration accounts**, select **Create**.
For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-
| Property | Required | Value | Description |
|-|-|-|-|
| **Subscription** | Yes | <*Azure-subscription-name*> | The name for your Azure subscription |
- | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named `FabrikamIntegration-RG`. |
- | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). This example uses `Fabrikam-Integration`. |
- | **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app, or create your logic apps in the same location as your integration account. For this example, use `West US`. <p>**Note**: To create an integration account inside an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment** and select your ISE as the location. For more information, see [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). |
- | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <p>- [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) |
+ | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named **FabrikamIntegration-RG**. |
+ | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). This example uses **Fabrikam-Integration**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. <br><br>**Note**: To create an integration account inside an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment** and select your ISE as the location. For more information, see [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). |
+ | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <br><br>- [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) |
| **Enable log analytics** | No | Unselected | For this example, don't select this option. |
|||||
-1. When you're finished, select **Review + create**.
+1. When you're done, select **Review + create**.
After deployment completes, Azure opens your integration account.
For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-
For more information about pricing, see these resources:
- * [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
- * [Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits)
- * [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+ * [Azure Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
+ * [Azure Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits)
+ * [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
To import an integration account by using a JSON file, use the [az logic integration-account import](/cli/azure/logic/integration-account#az-logic-integration-account-import) command:
az logic integration-account import --name integration_account_01 \
<a name="link-account"></a>
-## Link to logic app (Consumption resource only)
+## Link to logic app
+
+For you to successfully link your integration account to your logic app resource, make sure that both resources use the *same* Azure subscription and Azure region.
+
+### [Consumption](#tab/consumption)
-For your **Logic App (Consumption)** workflow to access the B2B artifacts in your integration account, you must first link your logic app resource to your integration account. Both logic app and integration account must use the same Azure subscription and Azure region. To complete this task, you can use the Azure portal. If you use Visual Studio and your logic app is in an [Azure Resource Group project](../azure-resource-manager/templates/create-visual-studio-deployment-project.md), you can [link your logic app to an integration account by using Visual Studio](manage-logic-apps-with-visual-studio.md#link-integration-account).
+This section describes how to complete this task using the Azure portal. If you use Visual Studio and your logic app is in an [Azure Resource Group project](../azure-resource-manager/templates/create-visual-studio-deployment-project.md), you can [link your logic app to an integration account by using Visual Studio](manage-logic-apps-with-visual-studio.md#link-integration-account).
-1. In the [Azure portal](https://portal.azure.com), open an existing logic app, or create a new logic app.
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
-1. On your logic app's menu, under **Settings**, select **Workflow settings**. Under **Integration account**, open the **Select an Integration account** list, and select the integration account you want.
+1. On your logic app's navigation menu, under **Settings**, select **Workflow settings**. Under **Integration account**, open the **Select an Integration account** list, and select the integration account you want.
![Screenshot that shows the Azure portal with integration account menu with "Workflow settings" pane open and "Select an Integration account" list open.](./media/logic-apps-enterprise-integration-create-integration-account/select-integration-account.png)
For your **Logic App (Consumption)** workflow to access the B2B artifacts in you
![Screenshot that shows Azure confirmation message.](./media/logic-apps-enterprise-integration-create-integration-account/link-confirmation.png)
-Now your logic app can use the artifacts in your integration account plus the B2B connectors, such as XML validation and flat file encoding or decoding.
+Now your logic app workflow can use the artifacts in your integration account plus the B2B connectors, such as XML validation and flat file encoding or decoding.
+
+### [Standard](#tab/standard)
+
+#### Find your integration account's callback URL
+
+Before you can link your integration account to a Standard logic app resource, you need to have your integration account's **callback URL**.
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
+
+1. In the Azure portal search box, find and select your integration account. To browse existing accounts, enter **integration accounts**, and then select **Integration accounts**.
+
+1. From the **Integration accounts** list, select your integration account.
+
+1. On your selected integration account's navigation menu, under **Settings**, select **Callback URL**.
+
+1. Find the **Generated Callback URL** property value, copy the value, and save the URL to use later for linking.
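If you prefer the command line, the callback URL can also be retrieved through the Azure Resource Manager `listCallbackUrl` action on the integration account. The following `az rest` call is a sketch; the subscription ID, resource group, account name, and API version are placeholders or assumptions, so verify them for your environment:

```azurecli
# POST the listCallbackUrl action against the integration account resource.
az rest --method post \
    --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>/listCallbackUrl?api-version=2019-05-01"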
+
+#### Link your integration account to your Standard logic app resource
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On your logic app's navigation menu, under **Settings**, select **Configuration**.
+
+1. On the **Configuration** pane, check whether the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** exists.
+
+1. If the app setting doesn't exist, under the **Configuration** pane toolbar, select **New application setting**.
+
+1. Provide the following values for the app setting:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+ |||
+
+1. When you're done, select **OK**. When you return to the **Configuration** pane, make sure to save your changes. On the **Configuration** pane toolbar, select **Save**.
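As an alternative to the portal steps above, you can add the same app setting with the Azure CLI. This is a sketch with placeholder names for the logic app and resource group:

```azurecli
# Add the integration account callback URL as a Standard logic app setting.
az logicapp config appsettings set \
    --name <logic-app-name> \
    --resource-group <resource-group> \
    --settings "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL=<integration-account-callback-URL>"
```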
++ <a name="change-pricing-tier"></a>
Now your logic app can use the artifacts in your integration account plus the B2
To increase the [limits](logic-apps-limits-and-config.md#integration-account-limits) for an integration account, you can [upgrade to a higher pricing tier](#upgrade-pricing-tier), if available. For example, you can upgrade from the Free tier to the Basic tier or Standard tier. You can also [downgrade to a lower tier](#downgrade-pricing-tier), if available. For more pricing information, review the following documentation:
-* [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
-* [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
+* [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/)
+* [Azure Logic Apps pricing model](logic-apps-pricing.md#integration-accounts)
<a name="upgrade-pricing-tier"></a>
To make this change, use the [Azure CLI](/cli/azure/get-started-with-azure-cli).
## Unlink from logic app
+### [Consumption](#tab/consumption)
+
If you want to link your logic app to another integration account, or no longer use an integration account with your logic app, delete the link by using Azure Resource Explorer.

1. Open your browser window, and go to [Azure Resource Explorer (https://resources.azure.com)](https://resources.azure.com). Sign in with the same Azure account credentials.
If you want to link your logic app to another integration account, or no longer
"name": "<integration-account-name>", "id": "<integration-account-resource-ID>", "type": "Microsoft.Logic/integrationAccounts"
- },
+ },
+ }
``` For example:
If you want to link your logic app to another integration account, or no longer
![Screenshot that shows the Azure portal with the logic app menu and "Workflow settings" selected.](./media/logic-apps-enterprise-integration-create-integration-account/unlinked-account.png)
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On your logic app's navigation menu, under **Settings**, select **Configuration**.
+
+1. On the **Configuration** pane, find the app setting named **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL**.
+
+1. In the **Delete** column, select **Delete** (trash can icon).
+
+1. On the **Configuration** pane toolbar, select **Save**.
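The same unlinking can be scripted by deleting the app setting with the Azure CLI. This is a sketch with placeholder names:

```azurecli
# Remove the callback URL app setting to unlink the integration account.
az logicapp config appsettings delete \
    --name <logic-app-name> \
    --resource-group <resource-group> \
    --setting-names WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL
```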
+++
## Move integration account

You can move your integration account to another Azure resource group or Azure subscription. When you move resources, Azure creates new resource IDs, so make sure that you use the new IDs instead and update any scripts or tools associated with the moved resources. If you want to change the subscription, you must also specify an existing or new resource group.
For this task, you can use either the Azure portal by following the steps in thi
1. To acknowledge your understanding that any scripts or tools associated with the moved resources won't work until you update them with the new resource IDs, select the confirmation box, and then select **OK**.
-1. After you finish, make sure that you update any and all scripts with the new resource IDs for your moved resources.
+1. After you finish, make sure that you update all scripts with the new resource IDs for your moved resources.
## Delete integration account
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
Title: Convert JSON and XML with Liquid templates
-description: Transform JSON and XML by using Liquid templates as maps in Azure Logic Apps.
+description: Transform JSON and XML by using Liquid templates as maps in workflows using Azure Logic Apps.
ms.suite: integration---+ Previously updated : 07/25/2021 Last updated : 08/15/2022 # Customer intent: As a developer, I want to convert JSON and XML by using Liquid templates as maps in Azure Logic Apps
-# Transform JSON and XML using Liquid templates as maps in Azure Logic Apps
+# Transform JSON and XML using Liquid templates as maps in workflows using Azure Logic Apps
-When you want to perform basic JSON transformations in your logic apps, you can use native [data operations](../logic-apps/logic-apps-perform-data-operations.md) such as **Compose** or **Parse JSON**. For advanced and complex JSON to JSON transformations that have elements such as iterations, control flows, and variables, create and use templates that describe these transformations by using the [Liquid](https://shopify.github.io/liquid/) open-source template language. You can also [perform other transformations](#other-transformations), for example, JSON to text, XML to JSON, and XML to text.
+When you want to perform basic JSON transformations in your logic app workflows, you can use built-in data operations, such as the **Compose** action or **Parse JSON** action. However, some scenarios might require advanced and complex transformations that include elements such as iterations, control flows, and variables. For transformations from JSON to JSON, JSON to text, XML to JSON, or XML to text, you can create a template that describes the required mapping or transformation using the Liquid open-source template language. You can select this template when you add a **Liquid** built-in action to your workflow. You can use **Liquid** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
-Before you can perform a Liquid transformation in your logic app, you must first create a Liquid template that defines the mapping that you want. You then [upload the template as a map](../logic-apps/logic-apps-enterprise-integration-maps.md) into your [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). When you add the **Transform JSON to JSON - Liquid** action to your logic app, you can then select the Liquid template as the map for the action to use.
+Although no **Liquid** triggers are available, you can use any trigger or action to get or feed the source JSON or XML content into your workflow for transformation. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
-This article shows you how to complete these tasks:
+This article shows how to complete the following tasks:
* Create a Liquid template.
-* Add the template to your integration account.
-* Add the Liquid transform action to your logic app.
+* Upload the template to your integration account for Consumption logic app workflows or to your Standard logic app resource for use in any child workflow.
+* Add a Liquid action to your workflow.
* Select the template as the map that you want to use.
+For more information, review the following documentation:
+
+* [Perform data operations in Azure Logic Apps](logic-apps-perform-data-operations.md)
+* [Liquid open-source template language](https://shopify.github.io/liquid/)
+* [Consumption versus Standard logic apps](logic-apps-overview.md#resource-type-and-host-environment-differences)
+* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in)
+* [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md)
+* [Managed or Azure-hosted connectors overview for Azure Logic Apps](../connectors/managed.md) and [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+ ## Prerequisites
-* An Azure subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* Your logic app resource and workflow. Liquid operations don't include any triggers, so your workflow must include at least a trigger along with the **Liquid** action. For more information, review the following documentation:
+
+ * [Quickstart: Create your first Consumption logic app workflow with multi-tenant Azure Logic Apps](quickstart-create-first-logic-app-workflow.md)
+
+ * [Create a Standard logic app workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
+
+* Depending on whether you're working on a Consumption or Standard logic app workflow, you might need an [integration account resource](logic-apps-enterprise-integration-create-integration-account.md). Usually, you need this resource when you want to define and store artifacts for use in enterprise integration and B2B workflows.
+
+ > [!IMPORTANT]
+ >
+ > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+
+ * If you're working on a Consumption logic app workflow, your integration account requires a [link to your logic app resource](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+ * If you're working on a Standard logic app workflow, you can link your integration account to your logic app resource, upload maps directly to your logic app resource, or both, based on the following scenarios:
-* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
+ * If you already have an integration account with the artifacts that you need or want to use, you can link the integration account to multiple Standard logic app resources where you want to use the artifacts. That way, you don't have to upload maps to each individual logic app. For more information, review [Link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
+
+ * Some Azure-hosted integration account connectors, such as **AS2**, **EDIFACT**, and **X12**, let you create a connection to your integration account. If you're just using these connectors, you don't need the link.
+
+ * The built-in connectors named **Liquid** and **Flat File** let you select maps and schemas that you previously uploaded to your logic app resource or to a linked integration account, but not both. You can then use these artifacts across all child workflows within the *same logic app resource*.
+
+ So, if you don't have or need an integration account, you can use the upload option. Otherwise, you can use the linking option. Either way, you can use these artifacts across all child workflows within the same logic app resource.
* Basic knowledge about [Liquid template language](https://shopify.github.io/liquid/). Azure Logic Apps uses DotLiquid 2.0.361. > [!NOTE]
- > The **Transform JSON to JSON - Liquid** action follows the [DotLiquid implementation for Liquid](https://github.com/dotliquid/dotliquid),
+ >
+ > The Liquid action named **Transform JSON to JSON** follows the [DotLiquid implementation for Liquid](https://github.com/dotliquid/dotliquid),
> which differs in specific cases from the [Shopify implementation for Liquid](https://shopify.github.io/liquid). > For more information, see [Liquid template considerations](#liquid-template-considerations).
-## Create the template
+<a name="create-template"></a>
+
+## Step 1 - Create the template
+
+Before you can perform a Liquid transformation in your logic app workflow, you must first create a Liquid template that defines the mapping that you want.
1. Create the Liquid template that you use as a map for the JSON transformation. You can use any editing tool that you want.
- For this example, create the sample Liquid template as described in this section:
+ The JSON to JSON transformation example in this article uses the following sample Liquid template:
- ```json
- {%- assign deviceList = content.devices | Split: ', ' -%}
+ ```
+{%- assign deviceList = content.devices | Split: ', ' -%}
{
  "fullName": "{{content.firstName | Append: ' ' | Append: content.lastName}}",
This article shows you how to complete these tasks:
}
```
-1. Save the template by using the `.liquid` extension. This example uses `SimpleJsonToJsonTemplate.liquid`.
+1. Save the template using the Liquid template (**.liquid**) file extension. This example uses **SimpleJsonToJsonTemplate.liquid**.
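For reference, a hypothetical input payload for this template might look like the following sketch. Based on the `fullName` line shown above, the transformed output would contain `"fullName": "Sophia Owen"` for this input; the property values here are illustrative placeholders, not part of the template file:

```json
{
  "firstName": "Sophia",
  "lastName": "Owen",
  "devices": "Surface, Mobile"
}
```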
-## Upload the template
+<a name="upload-template"></a>
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials.
+## Step 2 - Upload Liquid template
-1. In the Azure portal search box, enter `integration accounts`, and select **Integration accounts**.
+After you create your Liquid template, you now have to upload the template based on the following scenario:
- ![Find "Integration accounts"](./media/logic-apps-enterprise-integration-liquid-transform/find-integration-accounts.png)
+* If you're working on a Consumption logic app workflow, [upload your template to your integration account](#upload-template-integration-account).
+
+* If you're working on a Standard logic app workflow, you can [upload your template to your integration account](#upload-template-integration-account), or [upload your template to your logic app resource](#upload-template-standard-logic-app).
+
+<a name="upload-template-integration-account"></a>
+
+### Upload template to integration account
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
+
+1. In the Azure portal search box, enter **integration accounts**, and select **Integration accounts**.
+
+   ![Screenshot showing the Azure portal search box with "integration accounts" entered and "Integration accounts" selected.](./media/logic-apps-enterprise-integration-liquid-transform/find-integration-accounts.png)
1. Find and select your integration account.
- ![Select integration account](./media/logic-apps-enterprise-integration-liquid-transform/select-integration-account.png)
+ ![Screenshot showing integration accounts pane with integration account selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-integration-account.png)
-1. On the **Overview** pane, under **Components**, select **Maps**.
+1. On the integration account's navigation menu, under **Settings**, select **Maps**.
- ![Select "Maps" tile](./media/logic-apps-enterprise-integration-liquid-transform/select-maps-tile.png)
+ ![Screenshot showing integration account navigation menu with "Maps" selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-maps.png)
-1. On the **Maps** pane, select **Add** and provide these details for your map:
+1. On the **Maps** pane, select **Add**. Provide the following information about your map:
   | Property | Value | Description |
   |-|-|-|
   | **Name** | `JsonToJsonTemplate` | The name for your map, which is "JsonToJsonTemplate" in this example |
- | **Map type** | **liquid** | The type for your map. For JSON to JSON transformation, you must select **liquid**. |
+ | **Map type** | **Liquid** | The type for your map. For JSON to JSON transformation, you must select **Liquid**. |
   | **Map** | `SimpleJsonToJsonTemplate.liquid` | An existing Liquid template or map file to use for transformation, which is "SimpleJsonToJsonTemplate.liquid" in this example. To find this file, you can use the file picker. For map size limits, see [Limits and configuration](../logic-apps/logic-apps-limits-and-config.md#artifact-capacity-limits). |
   |||
- ![Add Liquid template](./media/logic-apps-enterprise-integration-liquid-transform/add-liquid-template.png)
+ ![Screenshot showing "Add Map" pane with new template uploaded.](./media/logic-apps-enterprise-integration-liquid-transform/add-liquid-template.png)
-## Add the Liquid transformation action
+<a name="upload-template-standard-logic-app"></a>
-1. In the Azure portal, follow these steps to [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+### Upload template to Standard logic app
-1. In the Logic App Designer, add the [Request trigger](../connectors/connectors-native-reqres.md#add-request-trigger) to your logic app.
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app resource. Make sure that you're at the resource level, not the workflow level.
-1. Under the trigger, choose **New step**. In the search box, enter `liquid` as your filter, and select this action: **Transform JSON to JSON - Liquid**
+1. On your logic app resource's navigation menu, under **Artifacts**, select **Maps**.
- ![Find and select Liquid action](./media/logic-apps-enterprise-integration-liquid-transform/search-action-liquid.png)
+1. On the **Maps** pane toolbar, select **Add**.
-1. Open the **Map** list, and select your Liquid template, which is "JsonToJsonTemplate" in this example.
+1. On the **Add Map** pane, provide the following information about your template:
- ![Select map](./media/logic-apps-enterprise-integration-liquid-transform/select-map.png)
+ | Property | Value | Description |
+ |-|-|-|
+ | **Name** | `JsonToJsonTemplate` | The name for your map, which is "JsonToJsonTemplate" in this example |
+ | **Map type** | **Liquid** | The type for your map. For JSON to JSON transformation, you must select **Liquid**. |
+ | **Map** | `SimpleJsonToJsonTemplate.liquid` | An existing Liquid template or map file to use for transformation, which is "SimpleJsonToJsonTemplate.liquid" in this example. To find this file, you can use the file picker. For map size limits, see [Limits and configuration](../logic-apps/logic-apps-limits-and-config.md#artifact-capacity-limits). |
+ |||
- If the maps list is empty, most likely your logic app isn't linked to your integration account.
- To link your logic app to the integration account that has the Liquid template or map, follow these steps:
+1. When you're done, select **OK**.
- 1. On your logic app menu, select **Workflow settings**.
+   After your map file finishes uploading, the map appears in the **Maps** list. On your logic app resource's **Overview** page, under **Artifacts**, your uploaded map also appears.
- 1. From the **Select an Integration account** list, select your integration account, and select **Save**.
+## Step 3 - Add the Liquid transformation action
- ![Link logic app to integration account](./media/logic-apps-enterprise-integration-liquid-transform/link-integration-account.png)
+The following steps show how to add a Liquid transformation action for Consumption and Standard logic app workflows.
-1. Now add the **Content** property to this action. Open the **Add new parameter** list, and select **Content**.
+### [Consumption](#tab/consumption)
- ![Add "Content" property to action](./media/logic-apps-enterprise-integration-liquid-transform/add-content-property-to-action.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
-1. To set the **Content** property value, click inside the **Content** box so that the dynamic content list appears. Select the **Body** token, which represents the body content output from the trigger.
+1. If your workflow doesn't already have a trigger or any other actions that it needs, add those operations first. Liquid operations don't have any triggers available.
- ![Select "Body" token for "Content" property value](./media/logic-apps-enterprise-integration-liquid-transform/select-body.png)
+ This example continues with the Request trigger named **When a HTTP request is received**.
- When you're done, the action looks like this example:
+1. On the workflow designer, under the step where you want to add the Liquid action, select **New step**.
- ![Finished "Transform JSON to JSON" action](./media/logic-apps-enterprise-integration-liquid-transform/finished-transform-action.png)
+1. Under the **Choose an operation** search box, select **All**. In the search box, enter **liquid**.
-## Test your logic app
+1. From the actions list, select the Liquid action that you want to use.
-1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL and include the JSON input to transform, for example:
+ This example continues using the action named **Transform JSON to JSON**.
- ```json
- {
- "devices": "Surface, Windows Phone, Desktop computer, Monitors",
- "firstName": "Dean",
- "lastName": "Ledet",
- "phone": "(111)5551111"
- }
- ```
+ ![Screenshot showing Consumption workflow designer with a Liquid action selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-liquid-action-consumption.png)
-1. After your workflow finishes running, go to the workflow's run history, and examine the **Transform JSON to JSON** action's inputs and outputs, for example:
+1. In the action's **Content** property, provide the JSON output from the trigger or a previous action that you want to transform by following these steps.
- ![Example output](./media/logic-apps-enterprise-integration-liquid-transform/example-output-jsontojson.png)
+ 1. Click inside the **Content** box so that the dynamic content list appears.
-<a name="template-considerations"></a>
+ 1. From the dynamic content list, select the JSON data that you want to transform.
+
+ For this example, from the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
-## Liquid template considerations
+ ![Screenshot showing Consumption workflow, Liquid action's "Content" property, an open dynamic content list, and "Body" token selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-body-consumption.png)
-* Liquid templates follow the [file size limits for maps](../logic-apps/logic-apps-limits-and-config.md#artifact-capacity-limits) in Azure Logic Apps.
+1. For the **Map** property, open the **Map** list, and select your Liquid template.
-* The **Transform JSON to JSON - Liquid** action follows the [DotLiquid implementation for Liquid](https://github.com/dotliquid/dotliquid). This implementation is a port to the .NET Framework from the [Shopify implementation for Liquid](https://shopify.github.io/liquid/) and differs in [specific cases](https://github.com/dotliquid/dotliquid/issues).
+ This example continues with the template named **JsonToJsonTemplate**.
- Here are the known differences:
+ ![Screenshot showing Consumption workflow, Liquid action's "Map" property, and the selected template.](./media/logic-apps-enterprise-integration-liquid-transform/select-map-to-use-consumption.png)
- * The **Transform JSON to JSON - Liquid** action natively outputs a string, which can include JSON, XML, HTML, and so on. The Liquid action only indicates that the expected text output from the Liquid template's is a JSON string. The action instructs your logic app to parse input as a JSON object and applies a wrapper so that Liquid can interpret the JSON structure. After the transformation, the action instructs your logic app to parse the text output from Liquid back to JSON.
+ > [!NOTE]
+ >
+ > If the maps list is empty, most likely your logic app resource isn't linked to your integration account.
+ > Make sure to [link your logic app resource to the integration account that has the Liquid template or map](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
- DotLiquid doesn't natively understand JSON, so make sure that you escape the backslash character (`\`) and any other reserved JSON characters.
+ When you're done, the action looks similar to the following example:
- * If your template uses [Liquid filters](https://shopify.github.io/liquid/basics/introduction/#filters), make sure that you follow the [DotLiquid and C# naming conventions](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers#filter-and-output-casing), which use *sentence casing*. For all Liquid transforms, make sure that filter names in your template also use sentence casing. Otherwise, the filters won't work.
+ ![Screenshot showing Consumption workflow with finished "Transform JSON to JSON" action.](./media/logic-apps-enterprise-integration-liquid-transform/finished-transform-action-consumption.png)
- For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](http://dotliquidmarkup.org/TryOnline). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline).
+### [Standard](#tab/standard)
- * The `json` filter from the Shopify extension filters is currently [not implemented in DotLiquid](https://github.com/dotliquid/dotliquid/issues/384). Typically, you can use this filter to prepare text output for JSON string parsing, but instead, you need to use the `Replace` filter instead.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer, if not already open.
- * The standard `Replace` filter in the [DotLiquid implementation](https://github.com/dotliquid/dotliquid/blob/b6a7d992bf47e7d7dcec36fb402f2e0d70819388/src/DotLiquid/StandardFilters.cs#L425) uses [regular expression (RegEx) matching](/dotnet/standard/base-types/regular-expression-language-quick-reference), while the [Shopify implementation](https://shopify.github.io/liquid/filters/replace/) uses [simple string matching](https://github.com/Shopify/liquid/issues/202). Both implementations appear to work the same way until you use a RegEx-reserved character or an escape character in the match parameter.
+1. If your workflow doesn't already have a trigger or any other actions that it needs, add those operations first. Liquid operations don't have any triggers available.
- For example, to escape the RegEx-reserved backslash (`\`) escape character, use `| Replace: '\\', '\\'`, and not `| Replace: '\', '\\'`. These examples show how the `Replace` filter behaves differently when you try to escape the backslash character. While this version works successfully:
+ This example continues with the Request trigger named **When a HTTP request is received**.
- `{ "SampleText": "{{ 'The quick brown fox "jumped" over the sleeping dog\\' | Replace: '\\', '\\' | Replace: '"', '\"'}}"}`
+1. On the designer, under the step where you want to add the Liquid action, select the plus sign (**+**), and then select **Add an action**.
- With this result:
+1. On the **Add an action** pane that appears, under the search box, select **Built-in**.
- `{ "SampleText": "The quick brown fox \"jumped\" over the sleeping dog\\\\"}`
+1. In the search box, enter **liquid**. From the actions list, select the Liquid action that you want to use.
- This version fails:
+ This example continues using the action named **Transform JSON to JSON**.
- `{ "SampleText": "{{ 'The quick brown fox "jumped" over the sleeping dog\\' | Replace: '\', '\\' | Replace: '"', '\"'}}"}`
+ ![Screenshot showing Standard workflow with a Liquid action selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-liquid-action-standard.png)
- With this error:
+1. In the action's **Content** property, provide the JSON output from the trigger or a previous action that you want to transform by following these steps.
- `{ "SampleText": "Liquid error: parsing "\" - Illegal \ at end of pattern."}`
+ 1. Click inside the **Content** box so that the dynamic content list appears.
- For more information, see [Replace standard filter uses RegEx pattern matching...](https://github.com/dotliquid/dotliquid/issues/385).
+ 1. From the dynamic content list, select the JSON data that you want to transform.
- * The `Sort` filter in the [DotLiquid implementation](https://github.com/dotliquid/dotliquid/blob/b6a7d992bf47e7d7dcec36fb402f2e0d70819388/src/DotLiquid/StandardFilters.cs#L326) sorts items in an array or collection by property but with these differences:<p>
+ For this example, in the dynamic content list, under **When a HTTP request is received**, select the **Body** token, which represents the body content output from the trigger.
- * Follows [Shopify's sort_natural behavior](https://shopify.github.io/liquid/filters/sort_natural/), not [Shopify's sort behavior](https://shopify.github.io/liquid/filters/sort/).
+ ![Screenshot showing Standard workflow, Liquid action's "Content" property with dynamic content list opened, and "Body" token selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-body-standard.png)
- * Sorts only in string-alphanumeric order. For more information, see [Numeric sort](https://github.com/Shopify/liquid/issues/980).
+1. From the **Source** list, select either **LogicApp** or **IntegrationAccount** as your Liquid template source.
+
+ This example continues by selecting **IntegrationAccount**.
+
+ ![Screenshot showing Standard workflow with "Source" property and "IntegrationAccount" selected.](./media/logic-apps-enterprise-integration-liquid-transform/select-logic-app-integration-account.png)
+
+1. From the **Name** list, select your Liquid template.
+
+ This example continues with the template named **JsonToJsonTemplate**.
+
+ ![Screenshot showing Standard workflow and selected template.](./media/logic-apps-enterprise-integration-liquid-transform/select-map-to-use-standard.png)
+
+ > [!NOTE]
+ >
+ > If the maps list is empty, most likely your logic app resource isn't linked to your integration account.
+ > Make sure to [link your logic app resource to the integration account that has the Liquid template or map](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account).
+
+ When you're done, the action looks similar to the following example:
- * Uses *case-insensitive* order, not case-sensitive order. For more information, see [Sort filter does not follow casing behavior from Shopify's specification]( https://github.com/dotliquid/dotliquid/issues/393).
+ ![Screenshot showing Standard workflow with finished "Transform JSON to JSON" action.](./media/logic-apps-enterprise-integration-liquid-transform/finished-transform-action-standard.png)
+++
+## Test your workflow
+
+1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL and include the JSON input to transform, for example:
+
+ ```json
+ {
+ "devices": "Surface, Mobile, Desktop computer, Monitors",
+ "firstName": "Dean",
+ "lastName": "Ledet",
+ "phone": "(111)0001111"
+ }
+ ```
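+
+   Based on the template lines shown earlier, the transformed output includes a **fullName** property that's built from the input's first and last names. A sketch of that part of the output follows (your full template might emit more properties):
+
+   ```json
+   {
+     "fullName": "Dean Ledet"
+   }
+   ```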
+
+1. After your workflow finishes running, go to the workflow's run history, and examine the **Transform JSON to JSON** action's inputs and outputs, for example:
+
+ ![Screenshot showing example output.](./media/logic-apps-enterprise-integration-liquid-transform/example-output-jsontojson.png)
<a name="other-transformations"></a>
-## Other transformations using Liquid
+## Other Liquid transformations
-Liquid isn't limited to only JSON transformations. You can also use Liquid to perform other transformations, for example:
+You can use Liquid to perform other transformations, for example:
* [JSON to text](#json-text)
* [XML to JSON](#xml-json)
Liquid isn't limited to only JSON transformations. You can also use Liquid to pe
### Transform JSON to text
-Here's the Liquid template that's used for this example:
+The following Liquid template shows an example transformation for JSON to text:
```json
{{content.firstName | Append: ' ' | Append: content.lastName}}
```
-Here are the sample inputs and outputs:
+The following example shows the sample inputs and outputs:
-![Example output JSON to text](./media/logic-apps-enterprise-integration-liquid-transform/example-output-jsontotext.png)
+![Screenshot showing example output for JSON to text transformation.](./media/logic-apps-enterprise-integration-liquid-transform/example-output-jsontotext.png)
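+
+For reference, the following hedged sketch shows the same example as text. Given input JSON such as this (property names taken from the template):
+
+```json
+{
+  "firstName": "Dean",
+  "lastName": "Ledet"
+}
+```
+
+the template produces this text output:
+
+```
+Dean Ledet
+```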
<a name="xml-json"></a> ### Transform XML to JSON
-Here's the Liquid template that's used for this example:
+The following Liquid template shows an example transformation for XML to JSON:
-``` json
+```json
[{% JSONArrayFor item in content -%} {{item}} {% endJSONArrayFor -%}]
Here's the Liquid template that's used for this example:
The `JSONArrayFor` loop is a custom looping mechanism for XML input so that you can create JSON payloads that avoid a trailing comma. Also, the `where` condition for this custom looping mechanism uses the XML element's name for comparison, rather than the element's value like other Liquid filters. For more information, see [Deep Dive on set-body Policy - Collections of Things](https://azure.microsoft.com/blog/deep-dive-on-set-body-policy).
-Here are the sample inputs and outputs:
+The following example shows the sample inputs and outputs:
-![Example output XML to JSON](./media/logic-apps-enterprise-integration-liquid-transform/example-output-xmltojson.png)
+![Screenshot showing example output for XML to JSON transformation.](./media/logic-apps-enterprise-integration-liquid-transform/example-output-xmltojson.png)
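+
+As a rough, hypothetical sketch (the element names here are invented), input XML with repeating elements such as the following:
+
+```xml
+<devices>
+  <device><name>Surface</name></device>
+  <device><name>Monitor</name></device>
+</devices>
+```
+
+might produce a JSON array with one object per repeating element, roughly like this:
+
+```json
+[ { "name": "Surface" }, { "name": "Monitor" } ]
+```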
<a name="xml-text"></a> ### Transform XML to text
-Here's the Liquid template that's used for this example:
+The following Liquid template shows an example transformation for XML to text:
-``` json
+```json
{{content.firstName | Append: ' ' | Append: content.lastName}}
```
-Here are the sample inputs and outputs:
+The following example shows the sample inputs and outputs:
+
+![Screenshot showing example output for XML to text transformation.](./media/logic-apps-enterprise-integration-liquid-transform/example-output-xmltotext.png)
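+
+For example, assuming input XML whose root element exposes the properties that the template references (the structure here is invented):
+
+```xml
+<person>
+  <firstName>Dean</firstName>
+  <lastName>Ledet</lastName>
+</person>
+```
+
+the template produces the text output `Dean Ledet`.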
+
+<a name="template-considerations"></a>
+
+## Liquid template considerations
+
+* Liquid templates follow the [file size limits for maps](logic-apps-limits-and-config.md#artifact-capacity-limits) in Azure Logic Apps.
+
+* The **Transform JSON to JSON** action follows the [DotLiquid implementation for Liquid](https://github.com/dotliquid/dotliquid). This implementation is a port to the .NET Framework from the [Shopify implementation for Liquid](https://shopify.github.io/liquid/) and differs in [specific cases](https://github.com/dotliquid/dotliquid/issues).
+
+ The following list describes the known differences:
+
+   * The **Transform JSON to JSON** action natively outputs a string, which can include JSON, XML, HTML, and so on. The Liquid action only indicates that the expected text output from the Liquid template is a JSON string. The action instructs your logic app to parse the input as a JSON object and applies a wrapper so that Liquid can interpret the JSON structure. After the transformation, the action instructs your logic app to parse the text output from Liquid back to JSON.
+
+ DotLiquid doesn't natively understand JSON, so make sure that you escape the backslash character (`\`) and any other reserved JSON characters.
+
+ * If your template uses [Liquid filters](https://shopify.github.io/liquid/basics/introduction/#filters), make sure that you follow the [DotLiquid and C# naming conventions](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers#filter-and-output-casing), which use *sentence casing*. For all Liquid transforms, make sure that filter names in your template also use sentence casing. Otherwise, the filters won't work.
+
+ For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](http://dotliquidmarkup.org/TryOnline). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline).
+
+   * The `json` filter from the Shopify extension filters is currently [not implemented in DotLiquid](https://github.com/dotliquid/dotliquid/issues/384). Typically, you can use this filter to prepare text output for JSON string parsing, but you need to use the `Replace` filter instead.
+
+ * The standard `Replace` filter in the [DotLiquid implementation](https://github.com/dotliquid/dotliquid/blob/b6a7d992bf47e7d7dcec36fb402f2e0d70819388/src/DotLiquid/StandardFilters.cs#L425) uses [regular expression (RegEx) matching](/dotnet/standard/base-types/regular-expression-language-quick-reference), while the [Shopify implementation](https://shopify.github.io/liquid/filters/replace/) uses [simple string matching](https://github.com/Shopify/liquid/issues/202). Both implementations appear to work the same way until you use a RegEx-reserved character or an escape character in the match parameter.
+
+ For example, to escape the RegEx-reserved backslash (`\`) escape character, use `| Replace: '\\', '\\'`, and not `| Replace: '\', '\\'`. These examples show how the `Replace` filter behaves differently when you try to escape the backslash character. While this version works successfully:
+
+ `{ "SampleText": "{{ 'The quick brown fox "jumped" over the sleeping dog\\' | Replace: '\\', '\\' | Replace: '"', '\"'}}"}`
+
+ With this result:
+
+ `{ "SampleText": "The quick brown fox \"jumped\" over the sleeping dog\\\\"}`
+
+ This version fails:
+
+ `{ "SampleText": "{{ 'The quick brown fox "jumped" over the sleeping dog\\' | Replace: '\', '\\' | Replace: '"', '\"'}}"}`
+
+ With this error:
+
+ `{ "SampleText": "Liquid error: parsing "\" - Illegal \ at end of pattern."}`
+
+ For more information, see [Replace standard filter uses RegEx pattern matching...](https://github.com/dotliquid/dotliquid/issues/385).
+
+ * The `Sort` filter in the [DotLiquid implementation](https://github.com/dotliquid/dotliquid/blob/b6a7d992bf47e7d7dcec36fb402f2e0d70819388/src/DotLiquid/StandardFilters.cs#L326) sorts items in an array or collection by property but with these differences:
+
+ * Follows [Shopify's sort_natural behavior](https://shopify.github.io/liquid/filters/sort_natural/), not [Shopify's sort behavior](https://shopify.github.io/liquid/filters/sort/).
+
+ * Sorts only in string-alphanumeric order. For more information, see [Numeric sort](https://github.com/Shopify/liquid/issues/980).
-![Example output XML to text](./media/logic-apps-enterprise-integration-liquid-transform/example-output-xmltotext.png)
+ * Uses *case-insensitive* order, not case-sensitive order. For more information, see [Sort filter doesn't follow casing behavior from Shopify's specification]( https://github.com/dotliquid/dotliquid/issues/393).
## Next steps
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-maps.md
Previously updated : 08/30/2022 Last updated : 08/22/2022 # Add XSLT maps to transform XML in workflows with Azure Logic Apps
The following example shows a map that references an assembly named `XslUtilitie
### [Consumption](#tab/consumption)
-After you upload any assemblies that your map references, you can now upload your map.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-1. In the Azure portal, if your integration account isn't already open, in the main Azure search box, enter `integration accounts`, and select **Integration accounts**.
+1. In the Azure portal search box, enter **integration accounts**, and select **Integration accounts**.
-1. Select the integration account where you want to add your map.
+1. Find and select your integration account.
-1. On your integration account's menu, select **Overview**. Under **Settings**, select **Maps**.
+1. On the integration account's navigation menu, under **Settings**, select **Maps**.
-1. On the **Maps** pane toolbar, select **Add**.
+1. On the **Maps** pane, select **Add**.
1. Continue to add either a map [up to 2 MB](#smaller-map) or [more than 2 MB](#larger-map).
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
-description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from ML models, and deploy your ML models to an endpoint.
+description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models to an endpoint.
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"]
> * [v1](v1/concept-mlflow-v1.md) > * [v2 (current version)](concept-mlflow.md)
-[MLflow](https://www.mlflow.org) is an open-source framework, designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, a virtual machine or an Azure Machine Learning compute instance.
+[MLflow](https://www.mlflow.org) is an open-source framework that's designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows you to use a consistent set of tools regardless of where your experiments are running: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
> [!TIP]
-> Azure Machine Learning workspaces are MLflow-compatible, meaning that you can use Azure Machine Learning workspaces in the same way you use an MLflow Tracking Server. Such compatibility has the following advantages:
-> * You can use Azure Machine Learning workspaces as your tracking server for any experiment you are running with MLflow, regardless if they run on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
-> * You can run any training routine that uses MLflow in Azure Machine Learning without changes. Model mangagement and model deployment capabilities are also supported.
+> Azure Machine Learning workspaces are MLflow-compatible, which means you can use Azure Machine Learning workspaces in the same way that you use an MLflow tracking server. Such compatibility has the following advantages:
+> * You can use Azure Machine Learning workspaces as your tracking server for any experiment you're running with MLflow, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
+> * You can run any training routine that uses MLflow in Azure Machine Learning without changes. MLflow also supports model management and model deployment capabilities.
-MLflow can manage the complete machine learning lifecycle using four core capabilities:
+MLflow can manage the complete machine learning lifecycle by using four core capabilities:
-* [Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training job metrics, parameters and model artifacts; no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine or an Azure Machine Learning compute instance.
-* [Model Registries](https://mlflow.org/docs/latest/model-registry.html) is a component of MLflow that manage model's versions in a centralized repository.
-* [Model Deployments](https://mlflow.org/docs/latest/models.html#deploy-a-python-function-model-on-microsoft-azure-ml) is a component of MLflow that deploys models registered using the MLflow format to different compute targets. Because of how MLflow models are stored, there's no need to provide scoring scripts for models in such format.
-* [Projects](https://mlflow.org/docs/latest/projects.html) is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. It's supported on preview on Azure Machine Learning.
+* [Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training job metrics, parameters, and model artifacts, no matter where your experiment runs: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
+* [Model Registry](https://mlflow.org/docs/latest/model-registry.html) is a component of MLflow that manages a model's versions in a centralized repository.
+* [Model deployment](https://mlflow.org/docs/latest/models.html#deploy-a-python-function-model-on-microsoft-azure-ml) is a capability of MLflow that deploys models registered through the MLflow format to compute targets. Because of how MLflow models are stored, there's no need to provide scoring scripts for models in such a format.
+* [Project](https://mlflow.org/docs/latest/projects.html) is a format for packaging data science code in a reusable and reproducible way, based primarily on conventions. It's supported in preview on Azure Machine Learning.
## Tracking with MLflow
-Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, Azure Machine Learning CLI or the Azure Machine Learning studio. We recommend using MLflow for tracking experiments. To get you started, see [Log & view metrics and log files with MLflow](how-to-log-view-metrics.md).
+Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiments via the Azure Machine Learning Python SDK, the Azure Machine Learning CLI, or Azure Machine Learning studio. We recommend using MLflow for tracking experiments. To get started, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md).
> [!NOTE]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2 (preview). We recommend that you use MLflow for logging.
-With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models. This includes:
+With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models. Capabilities include:
-* [Track ML experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
-* [Track Azure Databricks ML experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
-* [Track Azure Synapse Analytics ML experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
+* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
+* [Track Azure Databricks machine learning experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning.
+* [Track Azure Synapse Analytics machine learning experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
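+
+As a minimal sketch of what tracking looks like in code, standard MLflow calls are all you need, assuming that the `azureml-mlflow` package is installed (the tracking URI and experiment name below are placeholders):
+
+```python
+import mlflow
+
+# Placeholder URI: retrieve the real value from your Azure Machine Learning workspace.
+mlflow.set_tracking_uri("azureml://<region>.api.azureml.ms/mlflow/v1.0/...")
+mlflow.set_experiment("my-experiment")
+
+with mlflow.start_run():
+    mlflow.log_param("learning_rate", 0.01)  # stored in the workspace
+    mlflow.log_metric("accuracy", 0.92)
+```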
> [!IMPORTANT]
-> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. Model registries are not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or Azure ML studio for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
-> - MLflow in Java support is limited to tracking experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked using the MLflow Java SDK. View the following [Java example about using the MLflow tracking client with the Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
+> - MLflow in R support is limited to tracking an experiment's metrics, parameters, and models on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. Model registries are not supported if you're using the MLflow R SDK. As an alternative, use the Azure Machine Learning CLI or Azure Machine Learning studio for model registration and management. View an [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
+> - MLflow in Java support is limited to tracking an experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked via the MLflow Java SDK. View a [Java example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
-To learn how to use MLflow to query experiments and runs in Azure Machine Learning, see [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md)
+To learn how to use MLflow to query experiments and runs in Azure Machine Learning, see [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
-## Model Registries with MLflow
+## Model registries with MLflow
-Azure Machine Learning supports MLflow for model management. This represents a convenient way to support the entire model lifecycle for users familiar with the MLFlow client.
+Azure Machine Learning supports MLflow for model management. This support offers a convenient way to manage the entire model lifecycle for users who are familiar with the MLflow client.
-To learn more about how you can manage models using the MLflow API in Azure Machine Learning, view [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
+To learn more about how to manage models by using the MLflow API in Azure Machine Learning, view [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
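+
+As a minimal sketch, registering a model that a run has logged uses the standard MLflow client API (the run ID and model name below are placeholders):
+
+```python
+import mlflow
+
+# Register the model that a run logged under the artifact path "model".
+mlflow.register_model(model_uri="runs:/<run-id>/model", name="my-classifier")
+```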
-## Model Deployments of MLflow models
+## Model deployments of MLflow models
-You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md), so you can leverage and apply Azure Machine Learning's model management capabilities and no-code deployment offering. We support deploying MLflow models to both real-time and batch endpoints. You can use the `azureml-mlflow` MLflow plugin, the Azure ML CLI v2, and using the user interface in Azure Machine Learning studio.
+You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) so that you can apply the model management capabilities and no-code deployment offering in Azure Machine Learning. Azure Machine Learning supports deploying models to both real-time and batch endpoints. You can use the `azureml-mlflow` MLflow plug-in, the Azure Machine Learning CLI v2, and the user interface in Azure Machine Learning studio.
Learn more at [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md).
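+
+For instance, the following hedged sketch uses the MLflow deployments client, assuming that the `azureml-mlflow` plug-in is installed and that MLflow already points at your workspace (the endpoint and model names are placeholders):
+
+```python
+import mlflow
+from mlflow.deployments import get_deploy_client
+
+# Get a deployment client for the current tracking URI (your workspace).
+client = get_deploy_client(mlflow.get_tracking_uri())
+
+# Deploy version 1 of a registered model to a hypothetical endpoint.
+client.create_deployment(name="my-endpoint", model_uri="models:/my-classifier/1")
+```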
-## Train MLflow projects (preview)
+## Training MLflow projects (preview)
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-You can submit training jobs to Azure Machine Learning using [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+You can submit training jobs to Azure Machine Learning by using [MLflow projects](https://www.mlflow.org/docs/latest/projects.html) (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud via [Azure Machine Learning compute](./how-to-create-attach-compute-cluster.md).
-Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
+Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
-## MLflow SDK, Azure ML v2 and Azure ML Studio capabilities
+## MLflow SDK, Azure Machine Learning v2, and Azure Machine Learning studio capabilities
-The following table shows which operations are supported by each of the tools available in the ML lifecycle.
+The following table shows which operations are supported by each of the tools available in the machine learning lifecycle.
-| Feature | MLflow SDK | Azure ML v2 (CLI/SDK) | Azure ML Studio |
+| Feature | MLflow SDK | Azure Machine Learning v2 (CLI/SDK) | Azure Machine Learning studio |
| :- | :-: | :-: | :-: |
-| Track and log metrics, parameters and models | **&check;** | | |
-| Retrieve metrics, parameters and models | **&check;**<sup>1</sup> | <sup>2</sup> | **&check;** |
+| Track and log metrics, parameters, and models | **&check;** | | |
+| Retrieve metrics, parameters, and models | **&check;**<sup>1</sup> | <sup>2</sup> | **&check;** |
| Submit training jobs with MLflow projects | **&check;** | | |
| Submit training jobs with inputs and outputs | | **&check;** | **&check;** |
-| Submit training jobs using ML pipelines | | **&check;** | |
+| Submit training jobs by using machine learning pipelines | | **&check;** | |
| Manage experiments and runs | **&check;**<sup>1</sup> | **&check;** | **&check;** |
| Manage MLflow models | **&check;**<sup>3</sup> | **&check;** | **&check;** |
| Manage non-MLflow models | | **&check;** | **&check;** |
The following table shows which operations are supported by each of the tools av
> [!NOTE]
> - <sup>1</sup> View [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md) for details.
> - <sup>2</sup> Only artifacts and models can be downloaded.
-> - <sup>3</sup> View [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
-> - <sup>4</sup> View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details. Deployment of MLflow models to batch inference using the MLflow SDK is not possible by the moment.
+> - <sup>3</sup> View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
+> - <sup>4</sup> View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details. Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment.
## Example notebooks
-If you are getting started with MLflow in Azure Machine Learning, we recommend you to explore the [notebooks examples about how to user MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/readme.md):
-
-* [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines.
-* [Training and tracking a classifier with MLflow using Service Principal authentication](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrate how to track experiments using MLflow from compute that is running outside Azure ML and how to authenticate against Azure ML services using a Service Principal.
-* [Hyper-parameters optimization using child runs with MLflow and HyperOpt optimizer](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrate how to use child runs in MLflow to do hyper-parameter optimization for models using the popular library HyperOpt. It shows how to transfer metrics, params and artifacts from child runs to parent runs.
-* [Logging models instead of assets with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
-* [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure ML using MLflow.
-* [Manage models registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries using MLflow.
-* [No-code deployment with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/deploying_with_mlflow.ipynb): Demonstrates how to deploy models in MLflow format to the different deployment target in Azure ML.
-* [Training models in Azure Databricks and deploying them on Azure ML with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
-* [Migrating models with scoring scripts to MLflow format](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code-deployment with MLflow.
-* [Using MLflow REST with Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with MLflow REST API when connected to Azure ML.
+If you're getting started with MLflow in Azure Machine Learning, we recommend that you explore the [notebook examples about how to use MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/readme.md):
+
+* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
+* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
+* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
+* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
+* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
+* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
+* [Deploying models with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/deploying_with_mlflow.ipynb): Demonstrates how to deploy no-code models in MLflow format to a deployment target in Azure Machine Learning.
+* [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments with the MLflow instance in Azure Databricks.
+* [Migrating models with a scoring script to MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/migrating-scoring-to-mlflow/scoring_to_mlmodel.ipynb): Demonstrates how to migrate models with scoring scripts to no-code deployment with MLflow.
+* [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb): Demonstrates how to work with the MLflow REST API when you're connected to Azure Machine Learning.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Previously updated : 01/04/2022 Last updated : 08/26/2022 #Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
Last updated 01/04/2022
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training runs, including logs, metrics, output, and a snapshot of your scripts. You use this information to determine which training run produces the best model.
-Once you have a model you like, you register it with the workspace. You then use the registered model and scoring scripts to deploy to Azure Container Instances, Azure Kubernetes Service, or to a field-programmable gate array (FPGA) as a REST-based HTTP endpoint.
+Once you have a model you like, you register it with the workspace. You then use the registered model and scoring scripts to deploy to an [online endpoint](concept-endpoints.md) as a REST-based HTTP endpoint.
## Taxonomy
The diagram shows the following components of a workspace:
+ A workspace can contain [Azure Machine Learning compute instances](concept-compute-instance.md), cloud resources configured with the Python environment necessary to run Azure Machine Learning.
+ [User roles](how-to-assign-roles.md) enable you to share your workspace with other users, teams, or projects.
-+ [Compute targets](v1/concept-azure-machine-learning-architecture.md#compute-targets) are used to run your experiments.
-+ When you create the workspace, [associated resources](#resources) are also created for you.
-+ [Experiments](v1/concept-azure-machine-learning-architecture.md#experiments) are training runs you use to build your models.
-+ [Pipelines](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) are reusable workflows for training and retraining your model.
-+ [Datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores) aid in management of the data you use for model training and pipeline creation.
++ [Compute targets](concept-compute-target.md) are used to run your experiments.
++ When you create the workspace, [associated resources](#associated-resources) are also created for you.
++ Jobs are training runs you use to build your models. You can organize your jobs into Experiments.
++ [Pipelines](concept-ml-pipelines.md) are reusable workflows for training and retraining your model.
++ [Data assets](concept-data.md) aid in management of the data you use for model training and pipeline creation.
+ Once you have a model you want to deploy, you create a registered model.
-+ Use the registered model and a scoring script to create a [deployment endpoint](v1/concept-azure-machine-learning-architecture.md#endpoints).
++ Use the registered model and a scoring script to create an [online endpoint](concept-endpoints.md).

## Tools for workspace interaction

You can interact with your workspace in the following ways:
-> [!IMPORTANT]
-> Tools marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ On the web:
  + [Azure Machine Learning studio](https://ml.azure.com)
  + [Azure Machine Learning designer](concept-designer.md)
You can interact with your workspace in the following ways:
+ On the command line using the Azure Machine Learning [CLI extension](how-to-configure-cli.md)
+ [Azure Machine Learning VS Code Extension](how-to-manage-resources-vscode.md#workspaces)

## Machine learning with a workspace

Machine learning tasks read and/or write artifacts to your workspace.
-+ Run an experiment to train a model - writes experiment run results to the workspace.
++ Run an experiment to train a model - writes job run results to the workspace.
+ Use automated ML to train a model - writes training results to the workspace.
+ Register a model in the workspace.
+ Deploy a model - uses the registered model to create a deployment.
+ Create and run reusable workflows.
-+ View machine learning artifacts such as experiments, pipelines, models, deployments.
++ View machine learning artifacts such as jobs, pipelines, models, deployments.
+ Track and monitor models.

## Workspace management

You can also perform the following workspace management tasks:
-| Workspace management task | Portal | Studio | Python SDK | Azure CLI | VS Code
-|||||||
-| Create a workspace | **&check;** | | **&check;** | **&check;** | **&check;** |
-| Manage workspace access | **&check;** || | **&check;** ||
-| Create and manage compute resources | **&check;** | **&check;** | **&check;** | **&check;** ||
-| Create a Notebook VM | | **&check;** | | ||
+| Workspace management task | Portal | Studio | Python SDK | Azure CLI | VS Code |
+|-|-|-|-|-|-|
+| Create a workspace | **&check;** | **&check;** | **&check;** | **&check;** | **&check;** |
+| Manage workspace access | **&check;** | | | **&check;** | |
+| Create and manage compute resources | **&check;** | **&check;** | **&check;** | **&check;** | **&check;** |
+| Create a compute instance | | **&check;** | **&check;** | **&check;** | **&check;** |
> [!WARNING]
> Moving your Azure Machine Learning workspace to a different subscription, or moving the owning subscription to a new tenant, is not supported. Doing so may cause errors.
-## <a name='create-workspace'></a> Create a workspace
+## Create a workspace
There are multiple ways to create a workspace:
-* Use the [Azure portal](quickstart-create-resources.md) for a point-and-click interface to walk you through each step.
-* Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks
+* Use [Azure Machine Learning studio](quickstart-create-resources.md) to quickly create a workspace with default settings.
+* Use the [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface with more options.
+* Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks.
* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](how-to-configure-cli.md) when you need to automate or customize the creation with corporate security standards. * If you work in Visual Studio Code, use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace). > [!NOTE] > The workspace name is case-insensitive.
-## <a name="sub-resources"></a> Sub resources
+## Sub resources
These sub resources are the main resources that are made in the AzureML workspace.
These sub resources are the main resources that are made in the AzureML workspac
* Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks. * Bandwidth: encapsulates all outbound data transfers across regions.
-## <a name="resources"></a> Associated resources
+## Associated resources
When you create a new workspace, it automatically creates several Azure resources that are used by the workspace:
When you create a new workspace, it automatically creates several Azure resource
> [!NOTE] > You can instead use existing Azure resource instances when you create the workspace with the [Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace) or the Azure Machine Learning CLI [using an ARM template](how-to-create-workspace-template.md).
-<a name="wheres-enterprise"></a>
-
-## What happened to Enterprise edition
-
-As of September 2020, all capabilities that were available in Enterprise edition workspaces are now also available in Basic edition workspaces.
-New Enterprise workspaces can no longer be created. Any SDK, CLI, or Azure Resource Manager calls that use the `sku` parameter will continue to work but a Basic workspace will be provisioned.
-
-Beginning December 21st, all Enterprise Edition workspaces will be automatically set to Basic Edition, which has the same capabilities. No downtime will occur during this process. On January 1, 2021, Enterprise Edition will be formally retired.
-
-In either editions, customers are responsible for the costs of Azure resources consumed and will not need to pay any additional charges for Azure Machine Learning. Please refer to the [Azure Machine Learning pricing page](https://azure.microsoft.com/pricing/details/machine-learning/) for more details.
- ## Next steps To learn more about planning a workspace for your organization's requirements, see [Organize and set up Azure Machine Learning](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-resource-organization).
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
For more information on working with resource groups, see [az group](/cli/azure/
## Create a workspace
-When you deploy an Azure Machine Learning workspace, various other services are [required as dependent associated resources](./concept-workspace.md#resources). When you use the CLI to create the workspace, the CLI can either create new associated resources on your behalf or you could attach existing resources.
+When you deploy an Azure Machine Learning workspace, various other services are [required as dependent associated resources](./concept-workspace.md#associated-resources). When you use the CLI to create the workspace, the CLI can either create new associated resources on your behalf or you could attach existing resources.
> [!IMPORTANT] > When attaching your own storage account, make sure that it meets the following criteria:
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
Create the Terraform configuration file that declares the Azure provider:
## Deploy a workspace
-The following Terraform configurations can be used to create an Azure Machine Learning workspace. When you create an Azure Machine Learning workspace, various other services are required as dependencies. The template also specifies these [associated resources to the workspace](./concept-workspace.md#resources). Depending on your needs, you can choose to use the template that creates resources with either public or private network connectivity.
+The following Terraform configurations can be used to create an Azure Machine Learning workspace. When you create an Azure Machine Learning workspace, various other services are required as dependencies. The template also specifies these [associated resources to the workspace](./concept-workspace.md#associated-resources). Depending on your needs, you can choose to use the template that creates resources with either public or private network connectivity.
# [Public network connectivity](#tab/publicworkspace)
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
# Manage Azure Machine Learning workspaces in the portal or with the Python SDK
-In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the Azure portal or the [SDK for Python](/python/api/overview/azure/ml/)
+In this article, you create, view, and delete [**Azure Machine Learning workspaces**](concept-workspace.md) for [Azure Machine Learning](overview-what-is-azure-machine-learning.md), using the Azure portal or the [SDK for Python](/python/api/overview/azure/ml/).
As your needs change or requirements for automation increase you can also manage workspaces [using the CLI](v1/reference-azure-machine-learning-cli.md), or [via the VS Code extension](how-to-setup-vs-code.md).
As your needs change or requirements for automation increase you can also manage
## Create a workspace
+You can create a workspace [directly in Azure Machine Learning studio](./quickstart-create-resources.md#create-the-workspace), with limited options available. Or use one of the methods below for more control of options.
+ # [Python](#tab/python) [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
Previously updated : 06/27/2022 Last updated : 08/29/2022
You can also create custom alerts to notify you of important status updates to y
There are three logs that can be enabled for online endpoints:
-* **AMLOnlineEndpointTrafficLog**: You could choose to enable traffic logs if you want to check the information of your request. Below are some cases:
+* **AMLOnlineEndpointTrafficLog** (preview): You could choose to enable traffic logs if you want to check the information of your request. Below are some cases:
* If the response isn't 200, check the value of the column "ResponseCodeReason" to see what happened. Also check the reason in the "HTTPS status codes" section of the [Troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md#http-status-codes) article.
There are three logs that can be enabled for online endpoints:
* You may also use this log for performance analysis in determining the time required by the model to process each request.
-* **AMLOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
+* **AMLOnlineEndpointEventLog** (preview): Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
| Name | Message | | -- | -- |
You can find example queries on the __Queries__ tab while viewing logs. Search f
The following tables provide details on the data stored in each log:
-**AMLOnlineEndpointTrafficLog**
+**AMLOnlineEndpointTrafficLog** (preview)
| Field name | Description | | - | - |
The following tables provide details on the data stored in each log:
| ContainerName | The name of the container where the log was generated. | Message | The content of the log.
-**AMLOnlineEndpointEventLog**
+**AMLOnlineEndpointEventLog** (preview)
| Field Name | Description |
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
-* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. You can't use a compute cluster with the no public IP address configuration. For more information, see [Enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. If you use a compute cluster configured for no public IP address, you must provide some method for the cluster to access the public internet. Internet access is required to pull images stored in the Microsoft Container Registry and to install packages from PyPI, Conda, and so on. For more information, see [Enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
> [!IMPORTANT] > The following limitations apply when using a compute cluster for image builds: > * Only a CPU SKU is supported.
- > * You can't use a compute cluster configured for no public IP address.
+ > * If you use a compute cluster configured for no public IP address, you must provide some way for the cluster to access the public internet. Internet access is required to pull images stored in the Microsoft Container Registry and to install packages from PyPI, Conda, and so on. You need to configure User Defined Routing (UDR) to reach a public IP so the cluster can access the internet. For example, you can use the public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP. For more information, see [How to securely train in a VNet](how-to-secure-training-vnet.md).
# [Azure CLI](#tab/cli)
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Previously updated : 10/21/2021 Last updated : 08/26/2022 adobe-target: true #Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
The workspace is the top-level resource for your machine learning activities, pr
## Create the workspace
-If you already have a workspace, skip this section and continue to [Create a compute instance](#instance).
+If you already have a workspace, skip this section and continue to [Create a compute instance](#create-compute-instance).
-If you don't yet have a workspace, create one now:
+If you don't yet have a workspace, create one now:
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com)
+1. Select **Create workspace**
+1. Provide the following information to configure your new workspace:
+ Field|Description
+   ---|---
+ Workspace name |Enter a unique name that identifies your workspace. Names must be unique across the resource group. Use a name that's easy to recall and to differentiate from workspaces created by others. The workspace name is case-insensitive.
+ Subscription |Select the Azure subscription that you want to use.
+ Resource group | Use an existing resource group in your subscription or enter a name to create a new resource group. A resource group holds related resources for an Azure solution. You need *contributor* or *owner* role to use an existing resource group. For more information about access, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+ Region | Select the Azure region closest to your users and the data resources to create your workspace.
+1. Select **Create** to create the workspace
-## <a name="instance"></a> Create compute instance
+## Create compute instance
You could install Azure Machine Learning on your own computer. But in this quickstart, you'll create an online compute resource that has a development environment already installed and ready to go. You'll use this online machine, a *compute instance*, for your development environment to write and run code in Python scripts and Jupyter notebooks. Create a *compute instance* to use this development environment for the rest of the tutorials and quickstarts.
-1. If you didn't select **Go to workspace** in the previous section, sign in to [Azure Machine Learning studio](https://ml.azure.com) now, and select your workspace.
+1. If you didn't just create a workspace in the previous section, sign in to [Azure Machine Learning studio](https://ml.azure.com) now, and select your workspace.
1. On the left side, select **Compute**. 1. Select **+New** to create a new compute instance. 1. Supply a name and keep all the defaults on the first page.
Create a *compute instance* to use this development environment for the rest of
In about two minutes, you'll see the **State** of the compute instance change from *Creating* to *Running*. It's now ready to go.
-## <a name="cluster"></a> Create compute clusters
+## Create compute clusters
Next you'll create a compute cluster. Clusters allow you to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud.
In less than a minute, the **State** of the cluster will change from *Creating*
> [!NOTE] > When the cluster is created, it will have 0 nodes provisioned. The cluster *does not* incur costs until you submit a job. This cluster will scale down when it has been idle for 2,400 seconds (40 minutes). This will give you time to use it in a few tutorials if you wish without waiting for it to scale back up.
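The same cluster behavior can also be expressed from the Python SDK (v1); a sketch, assuming a workspace config.json is already present, where the 2,400-second idle window maps to `idle_seconds_before_scaledown`:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # assumes a config.json for an existing workspace

compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",           # placeholder VM size
    min_nodes=0,                         # scale to zero: no cost while idle
    max_nodes=2,
    idle_seconds_before_scaledown=2400,  # the 40-minute idle window noted above
)
cluster = ComputeTarget.create(ws, "cpu-cluster", compute_config)
cluster.wait_for_completion(show_output=True)
```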
-## <a name="studio"></a> Quick tour of the studio
+## Quick tour of the studio
The studio is your web portal for Azure Machine Learning. This portal combines no-code and code-first experiences for an inclusive data science platform.
Review the parts of the studio on the left-hand navigation bar:
[!INCLUDE [machine-learning-workspace-diagnostics](../../includes/machine-learning-workspace-diagnostics.md)]
-## <a name="clean-up"></a>Clean up resources
+## Clean up resources
If you plan to continue now to the next tutorial, skip to [Next steps](#next-steps).
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Previously updated : 07/18/2022 Last updated : 08/29/2022 #Customer intent: This tutorial is intended to introduce Azure ML to data scientists who want to scale up or publish their ML projects. By completing a familiar end-to-end project, which starts by loading the data and ends by creating and calling an online inference endpoint, the user should become familiar with the core concepts of Azure ML and their most common usage. Each step of this tutorial can be modified or performed in other ways that might have security or scalability advantages. We will cover some of those in Part II of this tutorial; however, we suggest the reader use the provided links in each section to learn more on each topic.
First you'll install the v2 SDK on your compute instance:
1. From the list of **Compute Instances**, find the one you created.
-1. Select on "Terminal", to open the terminal session on the compute instance.
+1. Select "Terminal", to open the terminal session on the compute instance.
1. In the terminal window, install Python SDK v2 (preview) with this command:
The Azure ML framework can be used from CLI, Python SDK, or studio interface. In
Before creating the pipeline, you'll set up the resources the pipeline will use:
-* The dataset for training
+* The data asset for training
* The software environment to run the pipeline * A compute resource where the job will run
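All three of the resources in this list hang off the workspace, which the v2 SDK (preview) reaches through an `MLClient` handle. A minimal connection sketch, with placeholder identifiers:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholders: substitute your own subscription, resource group, and workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Quick smoke test: list the compute targets visible to this client.
for compute in ml_client.compute.list():
    print(compute.name)
```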
machine-learning Concept Mlflow V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow-v1.md
Title: MLflow and Azure Machine Learning (v1)
-description: Learn about MLflow with Azure Machine Learning to log metrics and artifacts from ML models, and deploy your ML models as a web service.
+description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models as a web service.
[!INCLUDE [dev v1](../../../includes/machine-learning-dev-v1.md)]
-> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
+> [!div class="op_single_selector" title1="Select the version of the Azure Machine Learning developer platform that you're using:"]
> * [v1](concept-mlflow-v1.md) > * [v2 (current version)](../concept-mlflow.md)
-[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine, or an Azure Databricks cluster.
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLflow's tracking URI and logging API are collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api). This component of MLflow logs and tracks your training run metrics and model artifacts, no matter where your experiment's environment is--on your computer, on a remote compute target, on a virtual machine, or in an Azure Databricks cluster.
-Together, MLflow Tracking and Azure Machine learning allow you to track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace. That experiment could've been run locally on your computer, on a remote compute target or a virtual machine.
+Together, MLflow Tracking and Azure Machine learning allow you to track an experiment's run metrics and store model artifacts in your Azure Machine Learning workspace.
## Compare MLflow and Azure Machine Learning clients
- The following table summarizes the different clients that can use Azure Machine Learning, and their respective function capabilities.
+The following table summarizes the clients that can use Azure Machine Learning and their respective capabilities.
- MLflow Tracking offers metric logging and artifact storage functionalities that are only otherwise available via the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
+MLflow Tracking offers metric logging and artifact storage functionalities that are otherwise available only through the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
-| Capability | MLflow Tracking & Deployment | Azure Machine Learning Python SDK | Azure Machine Learning CLI | Azure Machine Learning studio|
+| Capability | MLflow Tracking and deployment | Azure Machine Learning Python SDK | Azure Machine Learning CLI | Azure Machine Learning studio|
||||||
-| Manage workspace | | ✓ | ✓ | ✓ |
+| Manage a workspace | | ✓ | ✓ | ✓ |
| Use data stores  | | ✓ | ✓ | | | Log metrics | ✓ | ✓ | | | | Upload artifacts | ✓ | ✓ | | | | View metrics | ✓ | ✓ | ✓ | ✓ | | Manage compute | | ✓ | ✓ | ✓ | | Deploy models | ✓ | ✓ | ✓ | ✓ |
-|Monitor model performance||✓| | |
+| Monitor model performance ||✓| | |
| Detect data drift | | ✓ | | ✓ | ## Track experiments
-With MLflow Tracking you can connect Azure Machine Learning as the backend of your MLflow experiments. By doing so, you can do the following tasks,
+With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. You can then do the following tasks:
-+ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track ML models with MLflow and Azure Machine Learning](../how-to-use-mlflow.md).
++ Track and log experiment metrics and artifacts in your [Azure Machine Learning workspace](concept-azure-machine-learning-architecture.md#workspace). If you already use MLflow Tracking for your experiments, the workspace provides a centralized, secure, and scalable location to store training metrics and models. Learn more at [Track machine learning models with MLflow and Azure Machine Learning](../how-to-use-mlflow.md).
-+ Track and manage models in MLflow and Azure Machine Learning model registry.
++ Track and manage models in MLflow and the Azure Machine Learning model registry. + [Track Azure Databricks training runs](../how-to-use-mlflow-azure-databricks.md).
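As a sketch of the first bullet, and assuming the `azureml-mlflow` package is installed alongside the v1 SDK, pointing MLflow Tracking at the workspace looks roughly like this:

```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()  # assumes a config.json for an existing workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("my-mlflow-experiment")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # placeholder values; params and
    mlflow.log_metric("accuracy", 0.91)      # metrics land in the workspace
```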
With MLflow Tracking you can connect Azure Machine Learning as the backend of yo
[!INCLUDE [preview disclaimer](../../../includes/machine-learning-preview-generic-disclaimer.md)]
-You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](../how-to-create-attach-compute-cluster.md).
+You can use MLflow Tracking to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning back-end support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud via [Azure Machine Learning compute](../how-to-create-attach-compute-cluster.md).
-Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
+Learn more at [Train machine learning models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
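A sketch of such a submission follows; the `backend` name and `backend_config` keys reflect the preview integration, and the compute name is a placeholder, so treat the details as assumptions rather than a fixed contract:

```python
import mlflow

# Assumes an MLproject file in the current folder, the azureml-mlflow package,
# and a tracking URI already set to the workspace (see the earlier sketch).
run = mlflow.projects.run(
    uri=".",                    # folder containing the MLproject file
    parameters={"alpha": 0.3},  # hypothetical entry-point parameter
    backend="azureml",          # preview Azure Machine Learning backend
    backend_config={"COMPUTE": "cpu-cluster", "USE_CONDA": False},
)
```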
## Deploy MLflow experiments
-You can [deploy your MLflow model as an Azure web service](../how-to-deploy-mlflow-models.md), so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
+You can [deploy your MLflow model as an Azure web service](../how-to-deploy-mlflow-models.md) so that you can apply the model management and data drift detection capabilities in Azure Machine Learning to your production models.
## Next steps
-* [Track ML models with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
-* [Train ML models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md).
-* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md).
-* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
+* [Track machine learning models with MLflow and Azure Machine Learning](how-to-use-mlflow.md)
+* [Train machine learning models with MLflow projects and Azure Machine Learning (preview)](../how-to-train-mlflow-projects.md)
+* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md)
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md)
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
For more information on working with resource groups, see [az group](/cli/azure/
## Create a workspace
-When you deploy an Azure Machine Learning workspace, various other services are [required as dependent associated resources](../concept-workspace.md#resources). When you use the CLI to create the workspace, the CLI can either create new associated resources on your behalf or you could attach existing resources.
+When you deploy an Azure Machine Learning workspace, various other services are [required as dependent associated resources](../concept-workspace.md#associated-resources). When you use the CLI to create the workspace, the CLI can either create new associated resources on your behalf or you could attach existing resources.
> [!IMPORTANT] > When attaching your own storage account, make sure that it meets the following criteria:
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-apis.md
Previously updated : 07/01/2021 Last updated : 08/29/2022 # Partner Center submission API to onboard Azure apps in Partner Center
marketplace Submission Api Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/submission-api-onboard.md
Previously updated : 09/22/2021 Last updated : 08/29/2022 # Partner Center submission API onboarding
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* The cluster must have a minimum of three worker nodes and three manager nodes. * Don't scale the cluster workers to zero, or attempt a cluster shutdown. Deallocating or powering down any virtual machine in the cluster resource group is not supported. * Don't have taints that prevent OpenShift components from being scheduled.
-* Don't remove or modify the cluster Prometheus and Alertmanager services.
+* Don't remove or modify the cluster Prometheus service.
+* Don't remove or modify the cluster Alertmanager service or Default receiver. It *is* supported to create additional receivers to notify external systems.
* Don't remove Service Alertmanager rules. * Security groups can't be modified. Any attempt to modify security groups will be reverted. * Don't remove or modify Azure Red Hat OpenShift service logging (mdsd pods).
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Ensure the objects comply with the recommendations in this article. Note, these
## Prepare subnet for VNET injection Prerequisites:-- An entire subnet that can be dedicated to Orbital GSaaS in your virtual network in your resource group.
+- An entire subnet with no existing IPs allocated or in use that can be dedicated to Orbital GSaaS in your virtual network in your resource group.
Steps: 1. Delegate a subnet to service named: Microsoft.Orbital/orbitalGateways. Follow instructions here: [Add or remove a subnet delegation in an Azure virtual network](../virtual-network/manage-subnet-delegation.md).
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
na Previously updated : 05/20/2021 Last updated : 08/29/2022
For more information, see [Azure classic subscription administrators](classic-ad
### Azure account and Azure subscriptions
-An Azure account represents a billing relationship. An Azure account is a user identity, one or more Azure subscriptions, and an associated set of Azure resources. The person who creates the account is the Account Administrator for all subscriptions created in that account. That person is also the default Service Administrator for the subscription.
+An Azure account is used to establish a billing relationship. An Azure account is a user identity, one or more Azure subscriptions, and an associated set of Azure resources. The person who creates the account is the Account Administrator for all subscriptions created in that account. That person is also the default Service Administrator for the subscription.
Azure subscriptions help you organize access to Azure resources. They also help you control how resource usage is reported, billed, and paid for. Each subscription can have a different billing and payment setup, so you can have different subscriptions and different plans by office, department, project, and so on. Every service belongs to a subscription, and the subscription ID may be required for programmatic operations.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
The reason is likely a replication delay. The service principal is created in on
Set the `principalType` property to `ServicePrincipal` when creating the role assignment. You must also set the `apiVersion` of the role assignment to `2018-09-01-preview` or later. For more information, see [Assign Azure roles to a new service principal using the REST API](role-assignments-rest.md#new-service-principal) or [Assign Azure roles to a new service principal using Azure Resource Manager templates](role-assignments-template.md#new-service-principal).
+### Symptom - ARM template role assignment returns BadRequest status
+
+When you try to deploy an ARM template that assigns a role to a service principal, you get the error:
+
+`Tenant ID, application ID, principal ID, and scope are not allowed to be updated. (code: RoleAssignmentUpdateNotPermitted)`
+
+**Cause**
+
+The role assignment `name` is not unique, and it is viewed as an update.
+
+**Solution**
+
+Provide an idempotent unique value for the role assignment `name`. For example, seed the `guid()` function with values that uniquely identify the assignment:
+
+```json
+{
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2018-09-01-preview",
+ "name": "[guid(concat(resourceGroup().id, variables('resourceName'))]",
+ "properties": {
+ "roleDefinitionId": "[variables('roleDefinitionId')]",
+ "principalId": "[variables('principalId')]"
+ }
+}
+```
+ ### Symptom - Role assignments with identity not found In the list of role assignments for the Azure portal, you notice that the security principal (user, group, service principal, or managed identity) is listed as **Identity not found** with an **Unknown** type.
search Search Get Started Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-dotnet.md
ms.devlang: csharp Previously updated : 06/11/2021 Last updated : 08/29/2022 # Quickstart: Create a search index using the Azure.Search.Documents client library
-Use the new [Azure.Search.Documents (version 11) client library](/dotnet/api/overview/azure/search.documents-readme) to create a .NET Core console application in C# that creates, loads, and queries a search index.
+Use the [Azure.Search.Documents (version 11) client library](/dotnet/api/overview/azure/search.documents-readme) to create a .NET Core console application in C# that creates, loads, and queries a search index.
You can [download the source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) to start with a finished project or follow the steps in this article to create your own.
Before you begin, have the following tools and
+ [Visual Studio](https://visualstudio.microsoft.com/downloads/), any edition. Sample code was tested on the free Community edition of Visual Studio 2019.
-When setting up your project, you will download the [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/).
+When setting up your project, you'll download the [Azure.Search.Documents NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/).
Azure SDK for .NET conforms to [.NET Standard 2.0](/dotnet/standard/net-standard#net-implementation-support), which means .NET Framework 4.6.1 and .NET Core 2.0 as minimum requirements.
Assemble service connection information, and then start Visual Studio to create
### Copy a key and endpoint
-Calls to the service require a URL endpoint and an access key on every request. As a first step, find the API key and URL to add to your project. You will specify both values when creating the client in a later step.
+Calls to the service require a URL endpoint and an access key on every request. As a first step, find the API key and URL to add to your project. You'll specify both values when creating the client in a later step.
1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-2. In **Settings** > **Keys**, get an admin key for full rights on the service, required if you are creating or deleting objects. There are two interchangeable primary and secondary keys. You can use either one.
+2. In **Settings** > **Keys**, get an admin key for full rights on the service, required if you're creating or deleting objects. There are two interchangeable primary and secondary keys. You can use either one.
![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
After the project is created, add the client library. The [Azure.Search.Document
1. In **Tools** > **NuGet Package Manager**, select **Manage NuGet Packages for Solution...**.
-1. Click **Browse**.
+1. Select **Browse**.
1. Search for `Azure.Search.Documents` and select version 11.0 or later.
-1. Click **Install** on the right to add the assembly to your project and solution.
+1. Select **Install** on the right to add the assembly to your project and solution.
### Create a search client
In this example, synchronous methods of the Azure.Search.Documents library are u
1. Add an empty class definition to your project: **Hotel.cs**
-1. Copy the following code into **Hotel.cs** to define the structure of a hotel document. Attributes on the field determine how it is used in an application. For example, the `IsFilterable` attribute must be assigned to every field that supports a filter expression.
+1. Copy the following code into **Hotel.cs** to define the structure of a hotel document. Attributes on the field determine how it's used in an application. For example, the `IsFilterable` attribute must be assigned to every field that supports a filter expression.
```csharp using System;
In this example, synchronous methods of the Azure.Search.Documents library are u
} ```
-1. Create two more classes: **Hotel.Methods.cs** and **Address.Methods.cs** for ToString() overrides. These classes are used to render search results in the console output. The contents of these classes are not provided in this article, but you can copy the code from [files in GitHub](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11/AzureSearchQuickstart-v11).
+1. Create two more classes: **Hotel.Methods.cs** and **Address.Methods.cs** for ToString() overrides. These classes are used to render search results in the console output. The contents of these classes aren't provided in this article, but you can copy the code from [files in GitHub](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11/AzureSearchQuickstart-v11).
1. In **Program.cs**, create a [SearchIndex](/dotnet/api/azure.search.documents.indexes.models.searchindex) object, and then call the [CreateIndex](/dotnet/api/azure.search.documents.indexes.searchindexclient.createindex) method to express the index in your search service. The index also includes a [SearchSuggester](/dotnet/api/azure.search.documents.indexes.models.searchsuggester) to enable autocomplete on the specified fields.
When uploading documents, you must use an [IndexDocumentsBatch](/dotnet/api/azur
You can get query results as soon as the first document is indexed, but actual testing of your index should wait until all documents are indexed.
-This section adds two pieces of functionality: query logic, and results. For queries, use the [Search](/dotnet/api/azure.search.documents.searchclient.search) method. This method takes search text (the query string) as well as other [options](/dotnet/api/azure.search.documents.searchoptions).
+This section adds two pieces of functionality: query logic, and results. For queries, use the [Search](/dotnet/api/azure.search.documents.searchclient.search) method. This method takes search text (the query string) and other [options](/dotnet/api/azure.search.documents.searchoptions).
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) class represents the results.
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
WriteDocuments(response); ```
-1. In the second query, search on a term, add a filter that selects documents where Rating is greater than 4, and then sort by Rating in descending order. Filter is a boolean expression that is evaluated over [IsFilterable](/dotnet/api/azure.search.documents.indexes.models.searchfield.isfilterable) fields in an index. Filter queries either include or exclude values. As such, there is no relevance score associated with a filter query.
+1. In the second query, search on a term, add a filter that selects documents where Rating is greater than 4, and then sort by Rating in descending order. Filter is a boolean expression that is evaluated over [IsFilterable](/dotnet/api/azure.search.documents.indexes.models.searchfield.isfilterable) fields in an index. Filter queries either include or exclude values. As such, there's no relevance score associated with a filter query.
```csharp Console.WriteLine("Query #2: Search on 'hotels', filter on 'Rating gt 4', sort by Rating in descending order...\n");
The [SearchResults](/dotnet/api/azure.search.documents.models.searchresults-1) c
The previous queries show multiple [ways of matching terms in a query](search-query-overview.md#types-of-queries): full-text search, filters, and autocomplete.
-Full text search and filters are performed using the [SearchClient.Search](/dotnet/api/azure.search.documents.searchclient.search) method. A search query can be passed in the `searchText` string, while a filter expression can be passed in the [Filter](/dotnet/api/azure.search.documents.searchoptions.filter) property of the [SearchOptions](/dotnet/api/azure.search.documents.searchoptions) class. To filter without searching, just pass `"*"` for the `searchText` parameter of the [Search](/dotnet/api/azure.search.documents.searchclient.search) method. To search without filtering, leave the `Filter` property unset, or do not pass in a `SearchOptions` instance at all.
+Full text search and filters are performed using the [SearchClient.Search](/dotnet/api/azure.search.documents.searchclient.search) method. A search query can be passed in the `searchText` string, while a filter expression can be passed in the [Filter](/dotnet/api/azure.search.documents.searchoptions.filter) property of the [SearchOptions](/dotnet/api/azure.search.documents.searchoptions) class. To filter without searching, just pass `"*"` for the `searchText` parameter of the [Search](/dotnet/api/azure.search.documents.searchclient.search) method. To search without filtering, leave the `Filter` property unset, or don't pass in a `SearchOptions` instance at all.
## Run the program
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-python.md
ms.devlang: python Previously updated : 06/11/2021 Last updated : 08/29/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
The following services and tools are required for this quickstart.
-* [Anaconda 3.x](https://www.anaconda.com/distribution/#download-section), providing Python 3.x and Jupyter Notebook.
+* Visual Studio Code with the Python extension (or equivalent tool), with Python 3.7 or later
-* [azure-search-documents package](https://pypi.org/project/azure-search-documents/)
+* [azure-search-documents package](https://pypi.org/project/azure-search-documents/) from the Azure SDK for Python
* [Create a search service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use the Free tier for this quickstart.
All requests require an api-key on every request sent to your service. Having a
## Connect to Azure Cognitive Search
-In this task, start Jupyter Notebook and verify that you can connect to Azure Cognitive Search. You'll do this by requesting a list of indexes from your service. On Windows with Anaconda3, you can use Anaconda Navigator to launch a notebook.
+In this task, start Jupyter Notebook and verify that you can connect to Azure Cognitive Search. You'll do this step by requesting a list of indexes from your service.
1. Create a new Python3 notebook.
In this task, start Jupyter Notebook and verify that you can connect to Azure Co
) ```
-1. In the second cell, input the request elements that will be constants on every request. Provide your search service name, admin API key, and query API key, copied in a previous step. This cell also sets up the clients you will use for specific operations: [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) to create an index, and [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) to query an index.
+1. In the second cell, input the request elements that will be constants on every request. Provide your search service name, admin API key, and query API key, copied in a previous step. This cell also sets up the clients you'll use for specific operations: [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) to create an index, and [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) to query an index.
```python
- service_name = "YOUR-SEARCH-SERIVCE-NAME"
+ service_name = "YOUR-SEARCH-SERVICE-NAME"
admin_key = "YOUR-SEARCH-SERVICE-ADMIN-API-KEY" index_name = "hotels-quickstart"
In this task, start Jupyter Notebook and verify that you can connect to Azure Co
Required elements of an index include a name, a fields collection, and a key. The fields collection defines the structure of a logical *search document*, used for both loading data and returning results.
-Each field has a name, type, and attributes that determine how the field is used (for example, whether it is full-text searchable, filterable, or retrievable in search results). Within an index, one of the fields of type `Edm.String` must be designated as the *key* for document identity.
+Each field has a name, type, and attributes that determine how the field is used (for example, whether it's full-text searchable, filterable, or retrievable in search results). Within an index, one of the fields of type `Edm.String` must be designated as the *key* for document identity.
This index is named "hotels-quickstart" and has the field definitions you see below. It's a subset of a larger [Hotels index](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/hotels/Hotels_IndexDefinition.JSON) used in other walkthroughs. We trimmed it in this quickstart for brevity.
This step shows you how to query an index using the **search** method of the [se
print(" {}".format(facet)) ```
-1. In this example, look up a specific document based on its key. You would typically want to return a document when a user clicks on a document in a search result.
+1. In this example, look up a specific document based on its key. You would typically want to return a document when a user selects a document in a search result.
```python result = search_client.get_document(key="3")
This step shows you how to query an index using the **search** method of the [se
print("Category: {}".format(result["Category"])) ```
-1. In this example, we'll use the autocomplete function. This is typically used in a search box to help auto-complete potential matches as the user types into the search box.
+1. In this example, we'll use the autocomplete function. Autocomplete is typically used in a search box to provide potential matches as the user types into the search box.
When the index was created, a suggester named "sg" was also created as part of the request. A suggester definition specifies which fields can be used to find potential matches to suggester requests. In this example, those fields are 'Tags', 'Address/City', 'Address/Country'. To simulate auto-complete, pass in the letters "sa" as a partial string. The autocomplete method of [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) sends back potential term matches.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
Previously updated : 01/23/2021 Last updated : 08/29/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
## Overview
-This tutorial uses the new client library, [Azure.Search.Documents](/dotnet/api/overview/azure/search), version 11.x, to create and run multiple indexers. In this tutorial, you'll set up two Azure data sources so that you can configure an indexer that pulls from both to populate a single search index. The two data sets must have a value in common to support the merge. In this sample, that field is an ID. As long as there is a field in common to support the mapping, an indexer can merge data from disparate resources: structured data from Azure SQL, unstructured data from Blob storage, or any combination of [supported data sources](search-indexer-overview.md#supported-data-sources) on Azure.
+This tutorial uses [Azure.Search.Documents](/dotnet/api/overview/azure/search) to create and run multiple indexers. In this tutorial, you'll set up two Azure data sources so that you can configure an indexer that pulls from both to populate a single search index. The two data sets must have a value in common to support the merge. In this sample, that field is an ID. As long as there's a field in common to support the mapping, an indexer can merge data from disparate resources: structured data from Azure SQL, unstructured data from Blob storage, or any combination of [supported data sources](search-indexer-overview.md#supported-data-sources) on Azure.
A finished version of the code in this tutorial can be found in the following project: * [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources/v11)
-This tutorial has been updated to use the Azure.Search.Documents (version 11) package. For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources/v10) on GitHub.
+For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/multiple-data-sources/v10) on GitHub.
## Prerequisites
This tutorial has been updated to use the Azure.Search.Documents (version 11) pa
+ [Azure Storage](../storage/common/storage-account-create.md) + [Visual Studio](https://visualstudio.microsoft.com/) + [Azure Cognitive Search (version 11.x) NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/)
-+ [Create](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices)
++ [Azure Cognitive Search](search-create-service-portal.md)
-> [!Note]
+> [!NOTE]
> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources. ## 1 - Create services
This sample uses two small sets of data that describe seven fictional hotels. On
:::image type="content" source="media/tutorial-multiple-data-sources/cosmos-add-container.png" alt-text="Add container" border="false":::
-1. Select **Items** under **hotels**, and then click **Upload Item** on the command bar. Navigate to and then select the file **cosmosdb/HotelsDataSubset_CosmosDb.json** in the project folder.
+1. Select **Items** under **hotels**, and then select **Upload Item** on the command bar. Navigate to and then select the file **cosmosdb/HotelsDataSubset_CosmosDb.json** in the project folder.
:::image type="content" source="media/tutorial-multiple-data-sources/cosmos-upload.png" alt-text="Upload to Azure Cosmos DB collection" border="false"::: 1. Use the Refresh button to refresh your view of the items in the hotels collection. You should see seven new database documents listed.
-1. Copy a connection string from the **Keys** page into Notepad. You will need this for **appsettings.json** in a later step. If you did not use the suggested database name "hotel-rooms-db", copy the database name as well.
+1. Copy a connection string from the **Keys** page into Notepad. You'll need this value for **appsettings.json** in a later step. If you didn't use the suggested database name "hotel-rooms-db", copy the database name as well.
### Azure Blob Storage
-1. Sign in to the [Azure portal](https://portal.azure.com), navigate to your Azure storage account, click **Blobs**, and then click **+ Container**.
+1. Sign in to the [Azure portal](https://portal.azure.com), navigate to your Azure storage account, select **Blobs**, and then select **+ Container**.
1. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md) named **hotel-rooms** to store the sample hotel room JSON files. You can set the Public Access Level to any of its valid values. :::image type="content" source="media/tutorial-multiple-data-sources/blob-add-container.png" alt-text="Create a blob container" border="false":::
-1. After the container is created, open it and select **Upload** on the command bar. Navigate to the folder containing the sample files. Select all of them and then click **Upload**.
+1. After the container is created, open it and select **Upload** on the command bar. Navigate to the folder containing the sample files. Select all of them and then select **Upload**.
:::image type="content" source="media/tutorial-multiple-data-sources/blob-upload.png" alt-text="Upload files" border="false":::
-1. Copy the the storage account name and a connection string from the **Access Keys** page into Notepad. You will need both values for **appsettings.json** in a later step.
+1. Copy the storage account name and a connection string from the **Access Keys** page into Notepad. You'll need both values for **appsettings.json** in a later step.
### Azure Cognitive Search
-The third component is Azure Cognitive Search, which you can [create in the portal](search-create-service-portal.md).
+The third component is Azure Cognitive Search, which you can [create in the portal](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in your Azure resources.
### Copy an admin api-key and URL for Azure Cognitive Search
-To authenticate to your search service, you will need the service URL and an access key.
+To authenticate to your search service, you'll need the service URL and an access key.
1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
Having a valid key establishes trust, on a per request basis, between the applic
1. Start Visual Studio and in the **Tools** menu, select **NuGet Package Manager** and then **Manage NuGet Packages for Solution...**.
-1. In the **Browse** tab, find and then install **Azure.Search.Documents** (version 11.0 or later). You will have to click through additional dialogs to complete the installation.
+1. In the **Browse** tab, find and then install **Azure.Search.Documents** (version 11.0 or later).
:::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-nuget-azure.png" alt-text="Using NuGet to add Azure libraries" border="false":::
When indexing data from multiple data sources, make sure each incoming row or do
It often requires some up-front planning to identify a meaningful document key for your index, and make sure it exists in both data sources. In this demo, the `HotelId` key for each hotel in Cosmos DB is also present in the rooms JSON blobs in Blob storage.
-Azure Cognitive Search indexers can use field mappings to rename and even reformat data fields during the indexing process, so that source data can be directed to the correct index field. For example, in Cosmos DB, the hotel identifier is called **`HotelId`**. But in the JSON blob files for the hotel rooms, the hotel identifier is named **`Id`**. The program handles this by mapping the **`Id`** field from the blobs to the **`HotelId`** key field in the indexer.
+Azure Cognitive Search indexers can use field mappings to rename and even reformat data fields during the indexing process, so that source data can be directed to the correct index field. For example, in Cosmos DB, the hotel identifier is called **`HotelId`**. But in the JSON blob files for the hotel rooms, the hotel identifier is named **`Id`**. The program handles this discrepancy by mapping the **`Id`** field from the blobs to the **`HotelId`** key field in the indexer.
> [!NOTE] > In most cases, auto-generated document keys, such as those created by default by some indexers, do not make good document keys for combined indexes. In general you will want to use a meaningful, unique key value that already exists in, or can be easily added to, your data sources.
This simple C#/.NET console app performs the following tasks:
This sample program uses [CreateIndexAsync](/dotnet/api/azure.search.documents.indexes.searchindexclient.createindexasync) to define and create an Azure Cognitive Search index. It takes advantage of the [FieldBuilder](/dotnet/api/azure.search.documents.indexes.fieldbuilder) class to generate an index structure from a C# data model class.
-The data model is defined by the Hotel class, which also contains references to the Address and Room classes. The FieldBuilder drills down through multiple class definitions to generate a complex data structure for the index. Metadata tags are used to define the attributes of each field, such as whether it is searchable or sortable.
+The data model is defined by the Hotel class, which also contains references to the Address and Room classes. The FieldBuilder drills down through multiple class definitions to generate a complex data structure for the index. Metadata tags are used to define the attributes of each field, such as whether it's searchable or sortable.
The program will delete any existing index of the same name before creating the new one, in case you want to run this example more than once.
-The following snippets from the **Hotel.cs** file shows single fields, followed by a reference to another data model class, Room[], which in turn is defined in **Room.cs** file (not shown).
+The following snippets from the Hotel.cs file show single fields, followed by a reference to another data model class, Room[], which in turn is defined in **Room.cs** file (not shown).
```csharp . . .
Blob storage indexers can use [IndexingParameters](/dotnet/api/azure.search.docu
This example defines a schedule for the indexer, so that it will run once per day. You can remove the schedule property from this call if you don't want the indexer to automatically run again in the future. ```csharp
-// Map the Id field in the Room documents to the HotelId key field in the index
-List<FieldMapping> map = new List<FieldMapping> {
- new FieldMapping("Id")
- {
- TargetFieldName = "HotelId"
- }
-};
- IndexingParameters parameters = new IndexingParameters(); parameters.Configuration.Add("parsingMode", "json");
SearchIndexer blobIndexer = new SearchIndexer(
Schedule = new IndexingSchedule(TimeSpan.FromDays(1)) };
+// Map the Id field in the Room documents to the HotelId key field in the index
blobIndexer.FieldMappings.Add(new FieldMapping("Id") { TargetFieldName = "HotelId" }); // Reset the indexer if it already exists
In Azure portal, open the search service **Overview** page, and find the **hotel
:::image type="content" source="media/tutorial-multiple-data-sources/index-list.png" alt-text="List of Azure Cognitive Search indexes" border="false":::
-Click on the hotel-rooms-sample index in the list. You will see a Search Explorer interface for the index. Enter a query for a term like "Luxury". You should see at least one document in the results, and this document should show a list of room objects in its rooms array.
+Select the hotel-rooms-sample index in the list. You'll see a Search Explorer interface for the index. Enter a query for a term like "Luxury". You should see at least one document in the results, and this document should show a list of room objects in its rooms array.
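To verify the index from code instead of the portal, a short sketch like the following runs the same query with `SearchClient` (the endpoint and key below are placeholders, and it assumes the `Hotel` model from this tutorial, which includes a `HotelName` field):

```csharp
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Placeholder endpoint and key; substitute your search service's values.
var searchClient = new SearchClient(
    new Uri("https://<your-service>.search.windows.net"),
    "hotel-rooms-sample",
    new AzureKeyCredential("<your-query-key>"));

// Run the same query you'd type into Search Explorer.
SearchResults<Hotel> response = await searchClient.SearchAsync<Hotel>("Luxury");
await foreach (SearchResult<Hotel> result in response.GetResultsAsync())
{
    Console.WriteLine(result.Document.HotelName);
}
```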
## Reset and rerun
service-bus-messaging Service Bus Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-troubleshooting-guide.md
Title: Troubleshooting guide for Azure Service Bus | Microsoft Docs description: Learn about troubleshooting tips and recommendations for a few issues that you may see when using Azure Service Bus. Previously updated : 07/29/2022 Last updated : 08/29/2022
The following steps may help you with troubleshooting connectivity/certificate/t
</Detail> </Error> ```-- Run the following command to check if any port is blocked on the firewall. Ports used are 443 (HTTPS), 5671 (AMQP) and 9354 (Net Messaging/SBMP). Depending on the library you use, other ports are also used. Here is the sample command that check whether the 5671 port is blocked.
+- Run the following command to check if any port is blocked on the firewall. Ports used are 443 (HTTPS), 5671 and 5672 (AMQP), and 9354 (Net Messaging/SBMP). Depending on the library you use, other ports are also used. Here's a sample command that checks whether port 5671 is blocked.
```powershell tnc <yournamespacename>.servicebus.windows.net -port 5671
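# A variation on the command above (a sketch, not from the original article):
# check all of the common Service Bus ports listed above in one pass.
# Test-NetConnection is the full name of the tnc alias; replace the
# namespace placeholder as before.
443, 5671, 5672, 9354 | ForEach-Object {
    Test-NetConnection <yournamespacename>.servicebus.windows.net -Port $_
}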
service-bus-messaging Service Bus Tutorial Topics Subscriptions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-tutorial-topics-subscriptions-portal.md
To run the code, follow these steps:
git clone https://github.com/Azure/azure-service-bus.git ```
-2. Navigate to the sample folder `azure-service-bus\samples\DotNet\Azure.Messaging.ServiceBus\BasicSendReceiveTutorialwithFilters`.
+2. Navigate to the sample folder `azure-service-bus\samples\DotNet\Azure.Messaging.ServiceBus\BasicSendReceiveTutorialWithFilters`.
3. Obtain the connection string you copied to Notepad in the Obtain the management credentials section of this tutorial. You also need the name of the topic you created in the previous section.
To run the code, follow these steps:
dotnet build ```
-5. Navigate to the `BasicSendReceiveTutorialwithFilters\bin\Debug\netcoreapp3.1` folder.
+5. Navigate to the `BasicSendReceiveTutorialWithFilters\bin\Debug\netcoreapp3.1` folder.
6. Type the following command to run the program. Be sure to replace `myConnectionString` with the value you previously obtained, and `myTopicName` with the name of the topic you created: ```shell
- dotnet BasicSendReceiveTutorialwithFilters.dll -ConnectionString "myConnectionString" -TopicName "myTopicName"
+ dotnet --roll-forward Major BasicSendReceiveTutorialWithFilters.dll -ConnectionString "myConnectionString" -TopicName "myTopicName"
``` 7. Follow the instructions in the console to select filter creation first. Part of creating filters is to remove the default filters. When you use PowerShell or the CLI, you don't need to remove the default filters, but if you do it in code, you must remove them, as the sketch after these steps illustrates. The console commands 1 and 3 help you manage the filters on the subscriptions you previously created:
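As a rough illustration of removing the default filter in code (a sketch with placeholder names, not the tutorial's exact source), the `ServiceBusAdministrationClient` from `Azure.Messaging.ServiceBus.Administration` can delete a subscription's `$Default` rule and add a SQL filter rule in its place:

```csharp
using Azure.Messaging.ServiceBus.Administration;

// Placeholder values; substitute your connection string, topic, and subscription.
var adminClient = new ServiceBusAdministrationClient("<myConnectionString>");

// Remove the default match-everything rule, then add a SQL filter rule.
await adminClient.DeleteRuleAsync("myTopicName", "mySubscription", RuleProperties.DefaultRuleName);
await adminClient.CreateRuleAsync("myTopicName", "mySubscription",
    new CreateRuleOptions("ColorRedRule", new SqlRuleFilter("Color = 'Red'")));
```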
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft
**Operating system** | **Details** | Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b)
-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5
+CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5, 8.6
Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal. Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** | | | |
+14.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 14.04 LTS kernels supported in this release. |
14.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
+16.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1112-azure, 4.15.0-1113-azure | 16.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic <br/>4.8.0-34-generic to 4.8.0-58-generic <br/>4.10.0-14-generic to 4.10.0-42-generic <br/>4.11.0-13-generic to 4.11.0-14-generic <br/>4.13.0-16-generic to 4.13.0-45-generic <br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
-16.04 LTS | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new 16.04 LTS kernels supported in this release. |
-16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure|
|||
-18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic |
-18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic |
+18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 18.04 LTS kernels supported in this release. |
+18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
+18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-1137-azure </br> 4.15.0-1138-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 4.15.0-176-generic </br> 4.15.0-177-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
-18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
|||
-20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic |
+20.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
+20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. |
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic </br> 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1065-azure </br> 5.4.0-92-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
-20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
-20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
> [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
-> [!Note]
-> For Ubuntu 20.04, we had initially rolled out support for kernels 5.8.* but we have since found issues with support for this kernel and hence have redacted these kernels from our support statement for the time being.
- #### Supported Debian kernel versions for Azure virtual machines **Release** | **Mobility service version** | **Kernel version** | | | |
+Debian 7 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 7 kernels supported in this release. |
Debian 7 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
+Debian 8 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 8 kernels supported in this release. |
Debian 8 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
|||
+Debian 9.1 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. |
Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64 Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-18-amd64 </br> 4.19.0-0.bpo.19-amd64 </br> 4.19.0-0.bpo.17-cloud-amd64 to 4.19.0-0.bpo.19-cloud-amd64 Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-16-amd64, 4.9.0-17-amd64 Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 9.1 kernels supported in this release.
-Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64
|||
+Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release.
Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64 Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 | Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new Debian 10 kernels supported in this release. Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | No new Debian 10 kernels supported in this release.
-Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
> [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.12.14-16.100-azure:5 |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.85-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.110-default:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | No new SLES 12 kernels supported in this release. |
#### Supported SUSE Linux Enterprise Server 15 kernel versions for Azure virtual machines **Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.3.18-59.5-default:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br>
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 15 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>5.3.18-150300.38.69-azure:3 </br>
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-59.40-default:3 </br> SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
> [!Note] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
site-recovery Physical Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-azure-architecture.md
The following table and graphic provides a high-level view of the components use
| **Component** | **Requirement** | **Details** | | | | | | **Azure** | An Azure subscription and an Azure network. | Replicated data from on-premises physical machines is stored in Azure managed disks. Azure VMs are created with the replicated data when you run a failover from on-premises to Azure. The Azure VMs connect to the Azure virtual network when they're created. |
-| **Process server** | Installed by default together with the configuration server. | Acts as a replication gateway. Receives replication data, optimizes it with caching, compression, and encryption, and sends it to Azure storage.<br/><br/> The process server also installs the Mobility service on servers you want to replicate.<br/><br/> As your deployment grows, you can add additional, separate process servers to handle larger volumes of replication traffic. |
-| **Master target server** | Installed by default together with the configuration server. | Handles replication data during fail back from Azure.<br/><br/> For large deployments, you can add an additional, separate master target server for failback. |
+| **Configuration server machine** | A single on-premises machine. We recommend that you run it as a VMware VM that can be deployed from a downloaded OVF template.<br/><br/> The machine runs all on-premises Site Recovery components, which include the configuration server, process server, and master target server. | **Configuration server**: Coordinates communications between on-premises and Azure, and manages data replication.<br/><br/> **Process server**: Installed by default on the configuration server. It receives replication data; optimizes it with caching, compression, and encryption; and sends it to Azure Storage. The process server also installs Azure Site Recovery Mobility Service on VMs you want to replicate, and performs automatic discovery of on-premises machines. As your deployment grows, you can add additional, separate process servers to handle larger volumes of replication traffic.<br/><br/> **Master target server**: Installed by default on the configuration server. It handles replication data during failback from Azure. For large deployments, you can add an additional, separate master target server for failback. |
| **Replicated servers** | The Mobility service is installed on each server you replicate. | We recommend you allow automatic installation from the process server. Or, you can install the service manually, or use an automated deployment method such as Configuration Manager. | **Physical to Azure architecture**
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi
| Linux | Only 64-bit systems are supported. 32-bit systems aren't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. They're required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, make sure you install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However, Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.<br/><br/> Chained IO isn't supported by Site Recovery. Linux Red Hat Enterprise | 5.2 to 5.11<br/> 6.1 to 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) <br/> A few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 don't have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, make sure you install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure.
-Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
+Linux: CentOS | 5.2 to 5.11<br/> 6.1 to 6.10<br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5, 8.6 <br/><br/> A few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 don't have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, make sure you install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication so that the machines can boot in Azure.
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10 [(Review supported kernel versions)](#debian-kernel-versions) SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
-16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic |
+16.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic |
|||
+18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1146-azure </br> 4.15.0-189-generic </br> 5.4.0-1086-azure </br> 5.4.0-122-generic </br>
18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 18.04 LTS |[9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1009-azure to 4.15.0-1138-azure </br> 4.15.0-101-generic to 4.15.0-177-generic </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.3.0-19-generic to 5.3.0-76-generic </br> 5.4.0-1020-azure to 5.4.0-1078-azure </br> 5.4.0-37-generic to 5.4.0-110-generic | 18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.15.0-1126-azure </br> 4.15.0-1127-azure </br> 4.15.0-1129-azure </br> 4.15.0-162-generic </br> 4.15.0-163-generic </br> 4.15.0-166-generic </br> 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1123-azure </br> 4.15.0-1124-azure </br> 4.15.0-1125-azure</br> 4.15.0-156-generic </br> 4.15.0-158-generic </br> 4.15.0-159-generic </br> 4.15.0-161-generic </br> 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-87-generic </br> 5.4.0-89-generic |
-18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic |
|||
+20.04 LTS|[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
+20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release |
20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-26-generic to 5.4.0-110-generic </br> 5.4.0-1010-azure to 5.4.0-1078-azure </br> 5.8.0-1033-azure to 5.8.0-1043-azure </br> 5.8.0-23-generic to 5.8.0-63-generic </br> 5.11.0-22-generic to 5.11.0-46-generic </br> 5.11.0-1007-azure to 5.11.0-1028-azure | 20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-88-generic </br> 5.4.0-89-generic |
-20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure |
-20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 5.4.0-26-generic to 5.4.0-80 </br> 5.4.0-1010-azure to 5.4.0-1048-azure </br> 5.4.0-81-generic </br> 5.4.0-1056-azure |
-> [!Note]
-> - For Ubuntu 20.04, we had initially rolled out support for kernels 5.8. But we have since found issues with support for this kernel and hence have redacted these kernels from our support statement for the time being.
### Debian kernel versions **Supported release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
+Debian 8 |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39), [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
|||
+Debian 9.1 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release|
Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64 </br> Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-17-amd64 to 4.9.0-19-amd64 </br> 4.19.0-0.bpo.19-cloud-amd64 </br> Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64 </br> Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64
-Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 </br>
|||
+Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release|
Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64
-Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-19-cloud-amd64, 4.19.0-20-cloud-amd64 </br> 4.19.0-19-amd64, 4.19.0-20-amd64
+Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-19-cloud-amd64, 4.19.0-20-cloud-amd64 </br> 4.19.0-19-amd64, 4.19.0-20-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 |
Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new kernels supported.
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
-Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64
### SUSE Linux Enterprise Server 12 supported kernel versions

**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.103-azure:5 |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.100-azure:5 |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.12-default:5 </br> 4.12.14-122.121-default:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure </br> 4.12.14-122.103-default </br> 4.12.14-122.98-default5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.73-azure </br> 4.12.14-16.76-azure </br> 4.12.14-122.88-default </br> 4.12.14-122.91-default |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.76-azure |
### SUSE Linux Enterprise Server 15 supported kernel versions

**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.69-azure:3 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 |
-SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-38.34-azure:3 to 5.3.18-59.40-default:3 </br> 5.3.18-150300.59.43-default:3 to 5.3.18-150300.59.68-default:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 </br> |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-18.72-azure: </br> 5.3.18-18.75-azure: </br> 5.3.18-24.93-default </br> 5.3.18-24.96-default </br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.66-azure </br> 5.3.18-18.69-azure </br> 5.3.18-24.83-default </br> 5.3.18-24.86-default |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
+ ## Linux file systems/guest storage
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-mysql.md
Title: "Quickstart - Integrate with Azure Database for MySQL" description: Explains how to provision and prepare an Azure Database for MySQL instance, and then configure Pet Clinic on Azure Spring Apps to use it as a persistent database with only one command.--++ Previously updated : 10/15/2021- Last updated : 08/28/2022+ # Quickstart: Integrate Azure Spring Apps with Azure Database for MySQL
Pet Clinic, as deployed in the default configuration [Quickstart: Build and depl
## Prerequisites
-* [MySQL CLI is installed](http://dev.mysql.com/downloads/mysql/)
-
-## Variables preparation
-
-We will use the following values. Save them in a text file or environment variables to avoid errors. The password should be at least 8 characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on.).
-
-```bash
-export RESOURCE_GROUP=<resource-group-name> # customize this
-export MYSQL_SERVER_NAME=<mysql-server-name> # customize this
-export MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_NAME}.mysql.database.azure.com
-export MYSQL_SERVER_ADMIN_NAME=<admin-name> # customize this
-export MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_NAME}\@${MYSQL_SERVER_NAME}
-export MYSQL_SERVER_ADMIN_PASSWORD=<password> # customize this
-export MYSQL_DATABASE_NAME=petclinic
-```
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
## Prepare an Azure Database for MySQL instance
-1. If you didn't run the following commands in the previous quickstarts, set the CLI defaults.
+1. Create an Azure Database for MySQL flexible server using the [az mysql flexible-server create](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create) command. Replace the placeholders `<database-name>`, `<resource-group-name>`, `<MySQL-flexible-server-name>`, `<admin-username>`, and `<admin-password>` with a name for your new database, the name of your resource group, a name for your new server, and an admin username and password.
- ```azcli
- az configure --defaults group=<resource group name> spring-cloud=<service name>
+ ```azurecli-interactive
+ az mysql flexible-server create \
+ --resource-group <resource-group-name> \
+ --name <MySQL-flexible-server-name> \
+ --database-name <database-name> \
+ --admin-user <admin-username> \
+ --admin-password <admin-password>
```
-1. Create an Azure Database for MySQL server.
+ > [!NOTE]
+ > The Standard_B1ms SKU is used by default. Refer to [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) for pricing details.
- ```azcli
- az mysql server create --resource-group ${RESOURCE_GROUP} \
- --name ${MYSQL_SERVER_NAME} \
- --admin-user ${MYSQL_SERVER_ADMIN_NAME} \
- --admin-password ${MYSQL_SERVER_ADMIN_PASSWORD} \
- --sku-name GP_Gen5_2 \
- --ssl-enforcement Disabled \
- --version 5.7
- ```
+ > [!TIP]
+ > The password should be at least eight characters long and contain at least one English uppercase letter, one English lowercase letter, one number, and one non-alphanumeric character (!, $, #, %, and so on).
-1. Allow access from Azure resources.
+1. A CLI prompt asks if you want to enable access to your IP. Enter `Y` to confirm.
- ```azcli
- az mysql server firewall-rule create --name allAzureIPs \
- --server ${MYSQL_SERVER_NAME} \
- --resource-group ${RESOURCE_GROUP} \
- --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
- ```
+## Connect your application to the MySQL database
-1. Allow access from your dev machine for testing.
+Use [Service Connector](../service-connector/overview.md) to connect the app hosted in Azure Spring Apps to your MySQL database.
- ```azcli
- az mysql server firewall-rule create --name devMachine \
- --server ${MYSQL_SERVER_NAME} \
- --resource-group ${RESOURCE_GROUP} \
- --start-ip-address <ip-address-of-your-dev-machine> \
- --end-ip-address <ip-address-of-your-dev-machine>
- ```
+> [!NOTE]
+> The service binding feature in Azure Spring Apps is being deprecated in favor of Service Connector.
-1. Increase connection timeout.
+### [Azure CLI](#tab/azure-cli)
- ```azcli
- az mysql server configuration set --name wait_timeout \
- --resource-group ${RESOURCE_GROUP} \
- --server ${MYSQL_SERVER_NAME} --value 2147483
- ```
+1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
-1. Create database in the MySQL server and set corresponding settings.
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ServiceLinker
+ ```
- ```sql
- // SUBSTITUTE values
- mysql -u ${MYSQL_SERVER_ADMIN_LOGIN_NAME} \
- -h ${MYSQL_SERVER_FULL_NAME} -P 3306 -p
+1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information.
- Enter password:
- Welcome to the MySQL monitor. Commands end with ; or \g.
- Your MySQL connection id is 64379
- Server version: 5.6.39.0 MySQL Community Server (GPL)
+ | Setting | Description |
+ |---|---|
+ | `--resource-group` | The name of the resource group that contains the app hosted by Azure Spring Apps. |
+ | `--service` | The name of the Azure Spring Apps resource. |
+ | `--app` | The name of the application hosted by Azure Spring Apps that connects to the target service. |
+ | `--target-resource-group` | The name of the resource group that contains the MySQL server. |
+ | `--server` | The MySQL server you want to connect to. |
+ | `--database` | The name of the database you created earlier. |
+ | `--secret name` | The MySQL server username. |
+ | `--secret` | The MySQL server password. |
- Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
+ ```azurecli-interactive
+ az spring connection create mysql-flexible \
+ --resource-group <Azure-Spring-Apps-resource-group-name> \
+ --service <Azure-Spring-Apps-resource-name> \
+ --app <app-name> \
+ --target-resource-group <mySQL-server-resource-group> \
+ --server <server-name> \
+ --database <database-name> \
+ --secret name=<username> secret=<secret>
+ ```
- Oracle is a registered trademark of Oracle Corporation and/or its
- affiliates. Other names may be trademarks of their respective
- owners.
+ > [!TIP]
+ > If the `az spring` command isn't recognized by the system, check that you have installed the Azure Spring Apps extension by running `az extension add --name spring`.
- Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+### [Portal](#tab/azure-portal)
- mysql> CREATE DATABASE petclinic;
- Query OK, 1 row affected (0.10 sec)
+1. In the Azure portal, type the name of your Azure Spring Apps instance in the search box at the top of the screen and select your instance.
+1. Under **Settings**, select **Apps** and select the application from the list.
+1. Select **Service Connector** from the left table of contents and select **Create**.
- mysql> CREATE USER 'root' IDENTIFIED BY 'petclinic';
- Query OK, 0 rows affected (0.11 sec)
+ :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\create-service-connection.png" alt-text="Screenshot of the Azure portal, in the Azure Spring Apps instance, create a connection with Service Connector.":::
- mysql> GRANT ALL PRIVILEGES ON petclinic.* TO 'root';
- Query OK, 0 rows affected (1.29 sec)
+1. Select or enter the following settings in the table.
- mysql> CALL mysql.az_load_timezone();
- Query OK, 3179 rows affected, 1 warning (6.34 sec)
+ | Setting | Example | Description |
+ |---|---|---|
+ | **Service type** | *DB for MySQL flexible server* | Select DB for MySQL flexible server as your target service |
+ | **Subscription** | *my-subscription* | The subscription that contains your target service. The default value is the subscription that contains the app deployed to Azure Spring Apps. |
+ | **Connection name** | *mysql_rk29a* | The connection name that identifies the connection between your app and target service. Use the connection name provided by Service Connector or enter your own connection name. |
+ | **MySQL flexible server** | *MySQL80* | Select the MySQL flexible server you want to connect to. |
+ | **MySQL database** | *petclinic* | Select the database you created earlier. |
+ | **Client type** | *.NET* | Select the application stack that works with the target service you selected. |
- mysql> SELECT name FROM mysql.time_zone_name;
- ...
+ :::image type="content" source="./media\quickstart-integrate-azure-database-mysql\basics-tab.png" alt-text="Screenshot of the Azure portal, filling out the basics tab in Service Connector.":::
- mysql> quit
- Bye
- ```
+1. Select **Next: Authentication** to select the authentication type. Then select **Connection string > Database credentials** and enter your database username and password.
-1. Set timezone.
+1. Select **Next: Networking** to select the network configuration and select **Configure firewall rules to enable access to target service**. Enter your username and password so that your app can reach the database.
- ```azcli
- az mysql server configuration set \
- --resource-group ${RESOURCE_GROUP} \
- --name time_zone \
- --server ${MYSQL_SERVER_NAME} \
- --value "US/Pacific"
- ```
+1. Select **Next: Review + Create** to review the provided information. Wait a few seconds for Service Connector to validate the information and select **Create** to create the service connection.
-## Update Apps to use MySQL database
++
+## Check connection to MySQL database
+
+### [Azure CLI](#tab/azure-cli)
-To enable MySQL as database for the sample app, simply update the *customer-service* app with active profile MySQL and database credentials as environment variables.
+Run the `az spring connection validate` command to show the status of the connection between Azure Spring Apps and the Azure MySQL database. Replace the placeholders below with your own information.
-```azcli
-az spring app update \
- --name customers-service \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \
- --env \
- MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \
- MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \
- MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \
- MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
+```azurecli-interactive
+az spring connection validate \
+    --resource-group <Azure-Spring-Apps-resource-group-name> \
+    --service <Azure-Spring-Apps-resource-name> \
+    --app <app-name> \
+    --connection <connection-name> \
+    --output table
```
-## Update extra apps
-
-```azcli
-az spring app update --name api-gateway \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql"
-az spring app update --name admin-server \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql"
-az spring app update --name customers-service \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \
- --env \
- MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \
- MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \
- MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \
- MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
-az spring app update --name vets-service \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \
- --env \
- MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \
- MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \
- MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \
- MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
-az spring app update --name visits-service \
- --jvm-options="-Xms2048m -Xmx2048m -Dspring.profiles.active=mysql" \
- --env \
- MYSQL_SERVER_FULL_NAME=${MYSQL_SERVER_FULL_NAME} \
- MYSQL_DATABASE_NAME=${MYSQL_DATABASE_NAME} \
- MYSQL_SERVER_ADMIN_LOGIN_NAME=${MYSQL_SERVER_ADMIN_LOGIN_NAME} \
- MYSQL_SERVER_ADMIN_PASSWORD=${MYSQL_SERVER_ADMIN_PASSWORD}
+The following output is displayed:
+
+```Output
+Name                                                            Result
+--------------------------------------------------------------  -------
+The target existence is validated                               success
+The target service firewall is validated                        success
+The configured values (except username/password) is validated   success
```
+> [!TIP]
+> To get more details about the connection between your services, remove `--output table` from the above command.
+
+### [Portal](#tab/azure-portal)
+
+Azure Spring Apps connections are displayed under **Settings > Service Connector**. Select **Validate** to check your connection status, and select **Learn more** to review the connection validation details.
## Clean up resources
-If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following commands:
+If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When no longer needed, delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete) command, which deletes the resources in the resource group. Replace `<resource-group>` with the name of your resource group.
```azurecli
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
+az group delete --name <resource-group>
```

## Next steps
static-web-apps Add Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-api.md
Title: Add an API to Azure Static Web Apps with Azure Functions description: Get started with Azure Static Web Apps by adding a Serverless API to your static web app using Azure Functions. -+ Previously updated : 12/03/2021- Last updated : 08/29/2022+
You can add serverless APIs to Azure Static Web Apps that are powered by Azure F
## Create the static web app
-Before adding an API, create and deploy a frontend application to Azure Static Web Apps. Use an existing app that you have already deployed or create one by following the [Building your first static site with Azure Static Web Apps](getting-started.md) quickstart.
+Before adding an API, create and deploy a frontend application to Azure Static Web Apps. Use an existing app that you've already deployed or create one by following the [Building your first static site with Azure Static Web Apps](getting-started.md) quickstart.
In Visual Studio Code, open the root of your app's repository. The folder structure contains the source for your frontend app and the Static Web Apps GitHub workflow in _.github/workflows_ folder.
You create an Azure Functions project for your static web app's API. By default,
1. Press <kbd>F1</kbd> to open the Command Palette.
-1. Select **Azure Static Web Apps: Create HTTP Function...**. If you're prompted to install the Azure Functions extension, install it and re-run this command.
+1. Select **Azure Static Web Apps: Create HTTP Function...**. If you're prompted to install the Azure Functions extension, install it and rerun this command.
1. When prompted, enter the following values:
Update the content of the _src/index.html_ file with the following code to fetch
(async function() { const { text } = await( await fetch(`/api/message`)).json(); document.querySelector('#name').textContent = text;
- }())
+ }());
</script> </body>
export default {
## Run the frontend and API locally
-To run your frontend app and API together locally, Azure Static Web Apps provides a CLI that emulates the cloud environment. The CLI leverages the Azure Functions Core Tools to run the API.
+To run your frontend app and API together locally, Azure Static Web Apps provides a CLI that emulates the cloud environment. The CLI uses the Azure Functions Core Tools to run the API.
### Install command line tools

Ensure you have the necessary command line tools installed.
-1. Install Azure Static Web Apps CLI.
- ```bash
- npm install -g @azure/static-web-apps-cli
- ```
-
-1. Install Azure Functions Core Tools V3.
- ```bash
- npm install -g azure-functions-core-tools@3
- ```
+```bash
+npm install -g @azure/static-web-apps-cli
+```
### Build frontend app
If your app uses a framework, build the app to generate the output before runnin
# [No Framework](#tab/vanilla-javascript)
-There is no need to build the app.
+There's no need to build the app.
# [Angular](#tab/angular)
Run the frontend app and API together by starting the app with the Static Web Ap
# [No Framework](#tab/vanilla-javascript)

Pass the current folder (`src`) and the API folder (`api`) to the CLI.

```bash
swa start src --api-location api
```
Run the frontend app and API together by starting the app with the Static Web Ap
-1. When the CLI processes start, access your app at `http://localhost:4280/`. Notice how the page calls the API and displays its output, `Hello from the API`.
+1. When the CLI processes start, access your app at [http://localhost:4280/](http://localhost:4280/). Notice how the page calls the API and displays its output, `Hello from the API`.
1. To stop the CLI, type <kbd>Ctrl + C</kbd>.
To publish changes to your static web app in Azure, commit and push your code to
1. Select the **Git: Commit All** command.
-1. When prompted for a commit message, enter **add API** and commit all changes to your local git repository.
+1. When prompted for a commit message, enter **feat: add API** and commit all changes to your local git repository.
1. Press <kbd>F1</kbd> to open the Command Palette.
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-cli.md
# Manage blob containers using Azure CLI
-Azure blob storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about blob storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
+Microsoft Azure Blob Storage allows you to store large amounts of unstructured object data. You can use blob storage to gather or expose media, content, or application data to users. Because all blob data is stored within containers, you must create a storage container before you can begin to upload data. To learn more about blob storage, read the [Introduction to Azure Blob storage](storage-blobs-introduction.md).
The Azure CLI is Azure's cross-platform command-line experience for managing Azure resources. You can use it in your browser with Azure Cloud Shell. You can also install it on macOS, Linux, or Windows and run it locally from the command line.
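
For example, a minimal sketch of creating a container with the Azure CLI before uploading data; the account and container names are placeholders:

```azurecli
az storage container create \
    --account-name <storage-account-name> \
    --name <container-name> \
    --auth-mode login
```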
az storage container generate-sas \
## Next steps
-In this how-to article, you learned how to manage containers in Azure blob storage. To learn more about working with blob storage by using Azure CLI, select an option below.
+In this how-to article, you learned how to manage containers in Blob Storage. To learn more about working with Blob Storage by using Azure CLI, select an option below.
> [!div class="nextstepaction"] > [Manage block blobs with Azure CLI](blob-cli.md)
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
It's helpful to understand some key terms relating to Azure AD Domain Service au
Azure role-based access control (Azure RBAC) enables fine-grained access management for Azure. Using Azure RBAC, you can manage access to resources by granting users the fewest permissions needed to perform their jobs. For more information on Azure RBAC, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
+- **Hybrid identities**
+
+ [Hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) are on-premises AD identities that are synced to the cloud.
## Common use cases

Identity-based authentication and support for Windows ACLs on Azure Files are best leveraged for the following use cases:
If you are keeping your primary file storage on-premises, Azure file shares can
## Supported scenarios
-The following table summarizes the supported Azure file shares authentication scenarios for Azure AD DS and on-premises AD DS. We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you have AD DS already setup on-premises or in Azure where your devices are domain joined to your AD, you should choose to leverage AD DS for Azure file shares authentication. Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
-
+This section summarizes the supported authentication scenarios for Azure file shares using Azure AD DS, on-premises AD DS, and Azure AD Kerberos for hybrid identities (preview). We recommend selecting the domain service that you adopted for your client environment for integration with Azure Files. If you already have AD DS set up on-premises or in Azure where your devices are domain joined to your AD, you should choose to leverage AD DS for Azure file shares authentication. Similarly, if you've already adopted Azure AD DS, you should use that for authenticating to Azure file shares.
-|Azure AD DS authentication | On-premises AD DS authentication |
-|||
-|Azure AD DS-joined Windows machines can access Azure file shares with Azure AD credentials over SMB. |On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synched to Azure AD over SMB. Your client must have line of sight to your AD DS. |
+- **On-premises AD DS authentication:** On-premises AD DS-joined or Azure AD DS-joined Windows machines can access Azure file shares over SMB with on-premises Active Directory credentials that are synced to Azure AD. Your client must have line of sight to your AD DS.
+- **Azure AD DS authentication:** Azure AD DS-joined Windows machines can access Azure file shares with Azure AD credentials over SMB.
+- **Azure AD Kerberos for hybrid identities (preview):** Using Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs.
### Restrictions

-- Azure AD DS and on-premises AD DS authentication do not support authentication against computer accounts. You can consider using a service logon account instead.
+- Azure AD DS and on-premises AD DS authentication don't support authentication against computer accounts. You can consider using a service logon account instead.
- Neither Azure AD DS authentication nor on-premises AD DS authentication is supported against Azure AD-joined devices or Azure AD-registered devices.
-- Azure file shares only support identity-based authentication against one of the following domain services, either [Azure Active Directory Domain Services (Azure AD DS)](#azure-ad-ds) or [on-premises Active Directory Domain Services (AD DS)](#ad-ds).
-- Neither identity-based authentication method is supported with Network File System (NFS) shares.
+- Identity-based authentication isn't supported with Network File System (NFS) shares.
## Advantages of identity-based authentication

Identity-based authentication for Azure Files offers several benefits over using Shared Key authentication:
The following diagram represents the workflow for Azure AD DS authentication to
:::image type="content" source="media/storage-files-active-directory-overview/Files-Azure-AD-DS-Diagram.png" alt-text="Diagram of the workflow for Azure AD DS authentication to Azure file shares.":::
+### Azure AD Kerberos for hybrid identities (preview)
+
+Enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
+
+For more information on this preview feature, see [Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files](storage-files-identity-auth-azure-active-directory-enable.md).
### Enable identity-based authentication

You can enable identity-based authentication with either Azure AD DS or on-premises AD DS for Azure file shares on your new and existing storage accounts. Only one domain service can be used for file access authentication on the storage account, which applies to all file shares in the account. You can find detailed guidance on setting up your file shares for authentication with Azure AD DS in our article [Enable Azure Active Directory Domain Services authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md), and guidance for on-premises AD DS in our other article, [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 08/17/2022 Last updated : 08/29/2022
# Enable Azure Active Directory Domain Services authentication on Azure Files
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) through two types of Domain
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods: on-premises Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview). We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring Azure AD DS for authentication with Azure file shares.
If you are new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Previously updated : 03/15/2021 Last updated : 08/29/2022

# Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) through two types of Domain
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods: on-premises Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (Azure AD DS), and Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview). We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
If you're new to Azure file shares, we recommend reading our [planning guide](storage-files-planning.md) before reading the following series of articles.
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
+
+ Title: Use Azure Active Directory to authorize access to Azure files over SMB for hybrid identities using Kerberos authentication (preview)
+description: Learn how to enable identity-based Kerberos authentication for hybrid user identities over Server Message Block (SMB) for Azure Files through Azure Active Directory. Your users can then access Azure file shares by using their Azure AD credentials (preview).
+++ Last updated : 08/29/2022++++
+# Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files (preview)
+
+> [!IMPORTANT]
+> Azure Files authentication with Azure Active Directory Kerberos is currently in public preview.
+> This preview version is provided without a service level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+For more information on all supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure Active Directory (AD) Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
+
+[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using the Kerberos authentication protocol through the following three methods:
+
+- On-premises Active Directory Domain Services (AD DS)
+- Azure Active Directory Domain Services (Azure AD DS)
+- Azure Active Directory Kerberos (Azure AD) for hybrid user identities only
+
+This article focuses on the last method: enabling and configuring Azure AD for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring access control lists (ACLs) and permissions might require line-of-sight to the domain controller.
+
+> [!NOTE]
+> Your Azure Storage account can't authenticate with both Azure AD and a second method like AD DS or Azure AD DS. You can only use one authentication method. If you've already chosen another authentication method for your storage account, you must disable it before enabling Azure AD Kerberos.
+
+## Applies to
+| File share type | SMB | NFS |
+|-|:-:|:-:|
+| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+
+## Prerequisites
+
+Before you enable Azure AD over SMB for Azure file shares, make sure you've completed the following prerequisites.
+
+The Azure AD Kerberos functionality for hybrid identities is only available on the following operating systems:
+
+ - Windows 11 Enterprise single or multi-session.
+ - Windows 10 Enterprise single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
+ - Windows Server, version 2022 with the latest cumulative updates installed, especially the [KB5007254 - 2021-11 Cumulative Update Preview for Microsoft server operating system version 21H2](https://support.microsoft.com/topic/november-22-2021-kb5007254-os-build-20348-380-preview-9a960291-d62e-486a-adcc-6babe5ae6fc1).
+
+To learn how to create and configure a Windows VM and log in by using Azure AD-based authentication, see [Log in to a Windows virtual machine in Azure by using Azure AD](../../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md).
+
+This feature doesn't currently support user accounts that you create and manage solely in Azure AD. User accounts must be [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need AD DS and Azure AD Connect. You must create these accounts in Active Directory and sync them to Azure AD. To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
+
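+For example, if you've just created or changed accounts in AD, the following is a minimal sketch for triggering a sync, assuming it runs on the Azure AD Connect server where the ADSync PowerShell module is available:
+
+```PowerShell
+# Start a delta synchronization cycle so recent AD changes flow to Azure AD
+# (assumes this runs on the Azure AD Connect server)
+Import-Module ADSync
+Start-ADSyncSyncCycle -PolicyType Delta
+```
+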
+You must disable multi-factor authentication (MFA) on the Azure AD app representing the storage account.
+
+Azure AD Kerberos authentication only supports using AES-256 encryption.
+
+## Regional availability
+
+Azure Files authentication with Azure AD Kerberos public preview is available in Azure public cloud in [all Azure regions](https://azure.microsoft.com/global-infrastructure/locations/).
+
+## Enable Azure AD Kerberos authentication for hybrid user accounts (preview)
+
+To enable Azure AD Kerberos authentication on Azure Files for hybrid user accounts (preview), use the Azure portal.
+
+1. Sign in to the Azure portal and select the storage account you want to enable Azure AD Kerberos authentication for.
+1. Under **Data storage**, select **File shares**.
+1. Next to **Active Directory**, select the configuration status (for example, **Not configured**).
+
+ :::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/configure-active-directory.png" alt-text="Screenshot of the Azure portal showing file share settings for a storage account. Active Directory configuration settings are selected." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/configure-active-directory.png" border="true":::
+
+1. Under **Azure AD Kerberos (preview)**, select **Set up**.
+1. Select the **Azure AD Kerberos** checkbox.
+
+ :::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/setup-azure-ad-kerberos.png" alt-text="Screenshot of the Azure portal showing Active Directory configuration settings for a storage account. Azure AD Kerberos is selected." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/setup-azure-ad-kerberos.png" border="true":::
+
+1. Optional: If you want to configure directory and file-level permissions through Windows File Explorer, then you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or by running the following PowerShell cmdlets from an on-premises AD-joined client:
+
+ ```PowerShell
+ $domainInformation = Get-ADDomain
+ $domainGuid = $domainInformation.ObjectGUID.ToString()
+ $domainName = $domainInformation.DnsRoot
+ ```
+
+ If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need line-of-sight to the on-premises AD.
+
+1. Select **Save**.
+
+## Grant admin consent to the new service principal
+
+After enabling Azure AD Kerberos authentication, you'll need to explicitly grant admin consent to the new Azure AD application registered in your Azure AD tenant to complete your configuration. You can configure the API permissions from the [Azure portal](https://portal.azure.com) by following these steps:
+
+1. Open **Azure Active Directory**.
+2. Select **App registrations** on the left pane.
+3. Select **All Applications**.
+
+ :::image type="content" source="media/storage-files-identity-auth-azure-active-directory-enable/azure-portal-azuread-app-registrations.png" alt-text="Screenshot of the Azure portal. Azure Active Directory is open. App registrations is selected in the left pane. All applications is highlighted in the right pane." lightbox="media/storage-files-identity-auth-azure-active-directory-enable/azure-portal-azuread-app-registrations.png":::
+
+4. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**.
+5. Select **API permissions** in the left pane.
+6. Select **Add permissions** at the bottom of the page.
+7. Select **Grant admin consent for "DirectoryName"**.
+
+## Disable multi-factor authentication on the storage account
+
+Azure AD Kerberos doesn't support using MFA to access Azure file shares configured with Azure AD Kerberos. You must exclude the Azure AD app representing your storage account from your MFA conditional access policies if they apply to all apps. The storage account app should have the same name as the storage account in the conditional access exclusion list.
+
+ > [!IMPORTANT]
+ > If you don't exclude MFA policies from the storage account app, you won't be able to access the file share. Trying to map the file share using *net use* will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
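+
+For reference, the mapping attempt that can surface this error is a standard *net use* mount; a minimal sketch follows (the drive letter, storage account, and share names are placeholders):
+
+```PowerShell
+# Map the share using the Kerberos ticket issued by Azure AD; no explicit credentials are passed
+net use Z: \\<storage-account-name>.file.core.windows.net\<share-name>
+```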
+
+## Assign share-level permissions
+
+When you enable identity-based access, you can set for each share which users and groups have access to that particular share. Once a user is allowed into a share, NTFS permissions on individual files and folders take over. This allows for fine-grained control over permissions, similar to an SMB share on a Windows server.
+
+To set share-level permissions, follow the instructions in [Assign share-level permissions to an identity](storage-files-identity-ad-ds-assign-permissions.md).
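+
+For example, here's a minimal sketch of assigning the built-in share-level role to a hybrid user with the Azure CLI; the assignee, IDs, and resource names are placeholders:
+
+```azurecli-interactive
+# Grant a hybrid user share-level access to one file share
+az role assignment create \
+    --role "Storage File Data SMB Share Contributor" \
+    --assignee <user-principal-name> \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
+```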
+
+## Configure directory and file-level permissions
+
+Once your share-level permissions are in place, there are two options for configuring directory and file-level permissions with Azure AD Kerberos authentication:
+
+- **Windows Explorer experience:** If you choose this option, then the client must be domain-joined to the on-premises AD.
+- **icacls utility:** If you choose this option, then the client needs line-of-sight to the on-premises AD.
+
+To configure directory and file-level permissions through Windows File Explorer, you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step isn't required.
+
+To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
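+
+For example, a minimal sketch using the icacls utility on a share mounted as drive Z:, assuming the client has line-of-sight to the on-premises AD (the drive letter and user are placeholders):
+
+```PowerShell
+# Grant a user Modify rights that inherit to subfolders (CI) and files (OI)
+icacls Z: /grant "<user-upn>:(OI)(CI)(M)"
+```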
+
+## Disable Azure AD authentication on your storage account
+
+If you want to use another authentication method, you can disable Azure AD authentication on your storage account by using the Azure portal.
+
+> [!NOTE]
+> Disabling this feature means that there will be no Active Directory configuration for file shares in your storage account until you enable one of the other Active Directory sources to reinstate your Active Directory configuration.
+
+1. Sign in to the Azure portal and select the storage account you want to disable Azure AD Kerberos authentication for.
+1. Under **Data storage**, select **File shares**.
+1. Next to **Active Directory**, select the configuration status (for example, **Not configured**).
+1. Under **Azure AD Kerberos (preview)**, select **Set up**.
+1. Uncheck the **Azure AD Kerberos** checkbox.
+1. Select **Save**.
+
+## Next steps
+
+For more information, see these resources:
+
+- [Overview of Azure Files identity-based authentication support for SMB access](storage-files-active-directory-overview.md)
+- [Enable AD DS authentication to Azure file shares](storage-files-identity-ad-ds-enable.md)
+- [Create a profile container with Azure Files and Azure Active Directory (preview)](../../virtual-desktop/create-profile-container-azure-ad.md)
+- [FAQ](storage-files-faq.md)
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
description: Understand planning for an Azure Files deployment. You can either d
Previously updated : 08/02/2022 Last updated : 08/29/2022
When deploying Azure file shares into storage accounts, we recommend:
- Only deploying GPv2 and FileStorage accounts and upgrading GPv1 and classic storage accounts when you find them in your environment.

## Identity
-To access an Azure file share, the user of the file share must be authenticated and authorized to access the share. This is done based on the identity of the user accessing the file share. Azure Files integrates with three main identity providers:
+To access an Azure file share, the user of the file share must be authenticated and authorized to access the share. This is done based on the identity of the user accessing the file share. Azure Files integrates with four main identity providers:
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS)**: Azure storage accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a storage account is domain-joined, the end user can mount a file share with the user account they signed into their PC with. AD-based authentication uses the Kerberos authentication protocol.
- **Azure Active Directory Domain Services (Azure AD DS)**: Azure AD DS provides a Microsoft-managed domain controller that can be used for Azure resources. Domain joining your storage account to Azure AD DS provides similar benefits to domain joining it to a customer-owned Active Directory. This deployment option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Azure AD DS provides AD-based authentication, this option also uses the Kerberos authentication protocol.
+- **Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview)**: Azure AD Kerberos allows you to use Azure AD to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Azure AD to issue Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs.
- **Azure storage account key**: Azure file shares may also be mounted with an Azure storage account key. To mount a file share this way, the storage account name is used as the username and the storage account key is used as a password (see the sketch at the end of this section). Using the storage account key to mount the Azure file share is effectively an administrator operation, because the mounted file share will have full permissions to all of the files and folders on the share, even if they have ACLs. When using the storage account key to mount over SMB, the NTLMv2 authentication protocol is used.

For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to behave like Windows file servers or NAS appliances, domain joining your storage account to **Customer-owned Active Directory** is the recommended option. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](storage-files-active-directory-overview.md).
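
For example, mounting a share with the storage account key from Windows looks like the following sketch; the drive letter, account, share, and key values are placeholders:

```PowerShell
# Mount the share over SMB; the account name is the username and the account key is the password (NTLMv2)
net use Z: \\<storage-account-name>.file.core.windows.net\<share-name> /user:localhost\<storage-account-name> <storage-account-key>
```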
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
Title: Troubleshoot Azure Files problems in Windows
-description: Troubleshooting Azure Files problems in Windows. See common issues related to Azure Files when you connect from Windows clients, and see possible resolutions. Only for SMB shares
+description: Troubleshoot problems with SMB Azure file shares in Windows. See common issues related to Azure Files when you connect from Windows clients, and see possible resolutions.
Previously updated : 08/04/2022 Last updated : 08/26/2022
Windows 8, Windows Server 2012, and later versions of each system negotiate requ
3. Verify the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account if the client does not support SMB encryption.

### Cause 2: Virtual network or firewall rules are enabled on the storage account
-Network traffic is denied if virtual network (VNET) and firewall rules are configured on the storage account, unless the client IP address or virtual network is allow listed.
+Network traffic is denied if virtual network (VNET) and firewall rules are configured on the storage account, unless the client IP address or virtual network is allow-listed.
### Solution for cause 2
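
If your client should be allowed, a minimal sketch of allow-listing its public IP address with the Azure CLI follows (the resource names and IP are placeholders):

```azurecli
az storage account network-rule add \
    --resource-group <resource-group> \
    --account-name <storage-account-name> \
    --ip-address <client-public-ip>
```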
Enable Azure AD DS on the Azure AD tenant of the subscription that your storage
### Self diagnostics steps

First, make sure that you have followed through all four steps to [enable Azure Files AD Authentication](./storage-files-identity-auth-active-directory-enable.md).
-Second, try [mounting Azure file share with storage account key](./storage-how-to-use-files-windows.md). If you failed to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment, detect the incompatible client configuration which would cause access failure for Azure Files, gives prescriptive guidance on self-fix and, collect the diagnostics traces.
+Second, try [mounting Azure file share with storage account key](./storage-how-to-use-files-windows.md). If you fail to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment, detect incompatible client configurations that would cause access failures for Azure Files, give prescriptive guidance on self-fix, and collect diagnostics traces.
Third, you can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on [AzFilesHybrid v0.1.2+ version](https://github.com/Azure-Samples/azure-files-samples/releases). You need to run this cmdlet with an AD user that has owner permission on the target storage account.

```PowerShell
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName
```

The cmdlet performs these checks below in sequence and provides guidance for failures:
1. CheckADObjectPasswordIsCorrect: Ensure that the password configured on the AD identity that represents the storage account matches that of the storage account kerb1 or kerb2 key. If the password is incorrect, you can run [Update-AzStorageAccountADObjectPassword](./storage-files-identity-ad-ds-update-password.md) to reset the password (see the sketch after this list).
-2. CheckADObject: Confirm that there is an object in the Active Directory that represents the storage account and has the correct SPN (service principal name). If the SPN isn't correctly setup, please run the Set-AD cmdlet returned in the debug cmdlet to configure the SPN.
+2. CheckADObject: Confirm that there is an object in the Active Directory that represents the storage account and has the correct SPN (service principal name). If the SPN isn't correctly set up, please run the Set-AD cmdlet returned in the debug cmdlet to configure the SPN.
3. CheckDomainJoined: Validate that the client machine is domain joined to AD. If your machine is not domain joined to AD, refer to this [article](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain) for domain join instructions.
4. CheckPort445Connectivity: Check that port 445 is open for the SMB connection. If the required port is not open, refer to the troubleshooting tool [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) for connectivity issues with Azure Files.
5. CheckSidHasAadUser: Check that the logged-on AD user is synced to Azure AD. If you want to look up whether a specific AD user is synchronized to Azure AD, you can specify -UserName and -Domain in the input parameters.
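If the first check fails, a minimal password-reset sketch using the AzFilesHybrid module (resource group and storage account names are placeholders):

```powershell
# Rotate the AD object's password to match the storage account's kerb2 key (placeholders).
Update-AzStorageAccountADObjectPassword -RotateToKerbKey kerb2 `
    -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageaccount"
```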
After enabling Azure AD Kerberos authentication, you'll need to explicitly grant
6. Select **Add permissions** at the bottom of the page. 7. Select **Grant admin consent for "DirectoryName"**.
-## Need help? Contact support.
+## Potential errors when enabling Azure AD Kerberos authentication for hybrid users
+
+You might encounter the following errors when trying to enable Azure AD Kerberos authentication for hybrid user accounts, a feature that's currently in public preview.
+
+### Error - Grant admin consent disabled
+
+In some cases, an Azure AD admin may disable the ability to grant admin consent to Azure AD applications. The following screenshot shows what this may look like in the Azure portal.
+
+ :::image type="content" source="media/storage-troubleshoot-windows-file-connection-problems/grant-admin-consent-disabled.png" alt-text="Screenshot of the Azure portal configured permissions blade displaying a warning that some actions may be disabled due to your permissions." lightbox="media/storage-troubleshoot-windows-file-connection-problems/grant-admin-consent-disabled.png":::
+
+If this is the case, ask your Azure AD admin to grant admin consent to the new Azure AD application. To find and view your administrators, select **Roles and administrators**, then select **Cloud application administrator**.
+
+### Error - "The request to AAD Graph failed with code BadRequest"
+
+#### Cause 1: an application management policy is preventing credentials from being created
+
+When enabling Azure AD Kerberos authentication, you might encounter this error if the following conditions are met:
+
+1. You're using the beta/preview feature of [application management policies](/graph/api/resources/applicationauthenticationmethodpolicy?view=graph-rest-beta).
+2. You (or your administrator) have set a [tenant-wide policy](/graph/api/resources/tenantappmanagementpolicy?view=graph-rest-beta) that:
+ - Has no start date, or has a start date before 2019-01-01
+ - Sets a restriction on service principal passwords, which either disallows custom passwords or sets a maximum password lifetime of less than 365.5 days
+
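+To check whether such a tenant-wide policy exists, one hedged option is to query the beta Graph endpoint directly. A minimal sketch using the Microsoft Graph PowerShell SDK (the module, scope, and endpoint are assumptions to verify):
+
+```powershell
+# Inspect the tenant-wide app management policy (beta endpoint, read-only).
+Import-Module Microsoft.Graph.Authentication
+Connect-MgGraph -Scopes "Policy.Read.All"
+Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/beta/policies/defaultAppManagementPolicy"
+```
+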
+There is currently no workaround for this error during the public preview.
+
+#### Cause 2: an application already exists for the storage account
+
+You might also encounter this error if you previously enabled Azure AD Kerberos authentication through the manual limited-preview steps. To delete the existing application, you or your IT admin can run the following script. Running this script removes the old, manually created application and allows the new experience to automatically create and manage a new application.
+
+> [!IMPORTANT]
+> This script must be run in PowerShell 5 because the AzureAD module doesn't work in PowerShell 7. This PowerShell snippet uses Azure AD Graph.
+
+```powershell
+# Replace with your storage account name and Azure AD tenant ID.
+$storageAccount = "exampleStorageAccountName"
+$tenantId = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
+Import-Module AzureAD
+Connect-AzureAD -TenantId $tenantId
+
+# Find the manually created application that represents the storage account and remove it.
+$application = Get-AzureADApplication -Filter "DisplayName eq '${storageAccount}'"
+if ($null -ne $application) {
+    Remove-AzureADApplication -ObjectId $application.ObjectId
+}
+```
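+
+A quick way to confirm the removal, using the same session and variables as above:
+
+```powershell
+# Should return nothing once the old application has been deleted.
+Get-AzureADApplication -Filter "DisplayName eq '${storageAccount}'"
+```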
+
+## Need help?
If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
stream-analytics Power Bi Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/power-bi-output.md
The following table lists property names and their descriptions to configure you
| Table name | Provide a table name under the dataset of the Power BI output. Currently, Power BI output from Stream Analytics jobs can have only one table in a dataset. |
| Authorize connection | You need to authorize with Power BI to configure your output settings. Once you grant this output access to your Power BI dashboard, you can revoke access by changing the user account password, deleting the job output, or deleting the Stream Analytics job. |
-For a walkthrough of configuring a Power BI output and dashboard, see the [Azure Stream Analytics and Power BI](stream-analytics-power-bi-dashboard.md) tutorial.
+For a walkthrough of configuring a Power BI output and dashboard, see [Tutorial: Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard](stream-analytics-real-time-fraud-detection.md).
> [!NOTE]
> Don't explicitly create the dataset and table in the Power BI dashboard. The dataset and table are automatically populated when the job is started and the job starts pumping output into Power BI. If the job query doesn't generate any results, the dataset and table aren't created. If Power BI already had a dataset and table with the same name as the one provided in this Stream Analytics job, the existing data is overwritten.
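As a rough sketch, an output like the one described above could also be created with the Az.StreamAnalytics module. The JSON property names below (dataset, table, groupId, refreshToken) are assumptions based on the PowerBI output datasource schema, and all values are placeholders:

```powershell
# Create a Power BI output on an existing job from a JSON definition (placeholders throughout).
$definition = @'
{
  "properties": {
    "datasource": {
      "type": "PowerBI",
      "properties": {
        "dataset": "sa-dataset",
        "table": "fraudulent-calls",
        "groupId": "00000000-0000-0000-0000-000000000000",
        "refreshToken": "<token-obtained-through-portal-authorization>"
      }
    }
  }
}
'@
Set-Content -Path .\PowerBIOutput.json -Value $definition
New-AzStreamAnalyticsOutput -ResourceGroupName "myResourceGroup" -JobName "myJob" `
    -Name "CallStream-PowerBI" -File .\PowerBIOutput.json
```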
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Below are the limitations of this feature:
## Next steps
-* [Power BI dashboard integration with Azure Stream Analytics](./stream-analytics-power-bi-dashboard.md)
+* [Tutorial: Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard](stream-analytics-real-time-fraud-detection.md)
* [Understand outputs from Azure Stream Analytics](./stream-analytics-define-outputs.md)
stream-analytics Stream Analytics Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-power-bi-dashboard.md
- Title: Power BI dashboard integration with Azure Stream Analytics
-description: This article describes how to use a real-time Power BI dashboard to visualize data out of an Azure Stream Analytics job.
-- Previously updated : 11/16/2020--
-# Stream Analytics and Power BI: A real-time analytics dashboard for streaming data
-
-Azure Stream Analytics enables you to take advantage of one of the leading business intelligence tools, [Microsoft Power BI](https://powerbi.com/). In this article, you learn how create business intelligence tools by using Power BI as an output for your Azure Stream Analytics jobs. You also learn how to create and use a real-time dashboard that is continuously updated by the Stream Analytics job.
-
-This article continues from the Stream Analytics [real-time fraud detection](stream-analytics-real-time-fraud-detection.md) tutorial. It builds on the workflow created in that tutorial and adds a Power BI output so that you can visualize fraudulent phone calls that are detected by a Streaming Analytics job.
-
-You can watch [a video](https://www.youtube.com/watch?v=SGUpT-a99MA) that illustrates this scenario.
--
-## Prerequisites
-
-Before you start, make sure you have the following:
-
-* An Azure account.
-* An account for Power BI Pro. You can use a work account or a school account.
-* A completed version of the [real-time fraud detection](stream-analytics-real-time-fraud-detection.md) tutorial. The tutorial includes an app that generates fictitious telephone-call metadata. In the tutorial, you create an event hub and send the streaming phone call data to the event hub. You write a query that detects fraudulent calls (calls from the same number at the same time in different locations).
--
-## Add Power BI output
-In the real-time fraud detection tutorial, the output is sent to Azure Blob storage. In this section, you add an output that sends information to Power BI.
-
-1. In the Azure portal, open the Streaming Analytics job that you created earlier. If you used the suggested name, the job is named `sa_frauddetection_job_demo`.
-
-2. On the left menu, select **Outputs** under **Job topology**. Then, select **+ Add** and choose **Power BI** from the dropdown menu.
-
-3. Select **+ Add** > **Power BI**. Then fill the form with the following details and select **Authorize** to use your own user identity to connect to Power BI (the token is valid for 90 days).
-
->[!NOTE]
->For production jobs, we recommend to connect to [use Managed Identity to authenticate your Azure Stream Analytics job to Power BI](./powerbi-output-managed-identity.md).
-
- |**Setting** |**Suggested value** |
- |||
- |Output alias | CallStream-PowerBI |
- |Dataset name | sa-dataset |
- |Table name | fraudulent-calls |
-
- ![Configure Stream Analytics output](media/stream-analytics-power-bi-dashboard/configure-stream-analytics-output.png)
-
- > [!WARNING]
- > If Power BI has a dataset and table that have the same names as the ones that you specify in the Stream Analytics job, the existing ones are overwritten.
- > We recommend that you do not explicitly create this dataset and table in your Power BI account. They are automatically created when you start your Stream Analytics job and the job starts pumping output into Power BI. If your job query doesn't return any results, the dataset and table are not created.
- >
-
-4. When you select **Authorize**, a pop-up window opens and you are asked to provide credentials to authenticate to your Power BI account. Once the authorization is successful, **Save** the settings.
-
-8. Click **Create**.
-
-The dataset is created with the following settings:
-
-* **defaultRetentionPolicy: BasicFIFO** - Data is FIFO, with a maximum of 200,000 rows.
-* **defaultMode: hybrid** - The dataset supports both streaming tiles (also known as push) and traditional report-based visuals. For the push content, the data is continuously updated from the stream analytics job in this case, with no need to schedule refresh from the Power BI side.
-
-Currently, you can't create datasets with other flags.
-
-For more information about Power BI datasets, see the [Power BI REST API](/rest/api/power-bi/) reference.
--
-## Write the query
-
-1. Close the **Outputs** blade and return to the job blade.
-
-2. Click the **Query** box.
-
-3. Enter the following query. This query is similar to the self-join query you created in the fraud-detection tutorial. The difference is that this query sends results to the new output you created (`CallStream-PowerBI`).
-
- >[!NOTE]
- >If you did not name the input `CallStream` in the fraud-detection tutorial, substitute your name for `CallStream` in the **FROM** and **JOIN** clauses in the query.
-
- ```SQL
- /* Our criteria for fraud:
- Calls made from the same caller to two phone switches in different locations (for example, Australia and Europe) within five seconds */
-
- SELECT System.Timestamp AS WindowEnd, COUNT(*) AS FraudulentCalls
- INTO "CallStream-PowerBI"
- FROM "CallStream" CS1 TIMESTAMP BY CallRecTime
- JOIN "CallStream" CS2 TIMESTAMP BY CallRecTime
-
- /* Where the caller is the same, as indicated by IMSI (International Mobile Subscriber Identity) */
- ON CS1.CallingIMSI = CS2.CallingIMSI
-
- /* ...and date between CS1 and CS2 is between one and five seconds */
- AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
-
- /* Where the switch location is different */
- WHERE CS1.SwitchNum != CS2.SwitchNum
- GROUP BY TumblingWindow(Duration(second, 1))
- ```
-
-4. Click **Save**.
--
-## Test the query
-
-This section is optional, but recommended.
-
-1. If the TelcoStreaming app is not currently running, start it by following these steps:
-
- * Open Command Prompt.
- * Go to the folder where the telcogenerator.exe and modified telcodatagen.exe.config files are.
- * Run the following command:
-
- `telcodatagen.exe 1000 .2 2`
-
-2. On the **Query** page for your Stream Analytics job, click the dots next to the `CallStream` input and then select **Sample data from input**.
-
-3. Specify that you want three minutes' worth of data and click **OK**. Wait until you're notified that the data has been sampled.
-
-4. Click **Test** and review the results.
-
-## Run the job
-
-1. Make sure the TelcoStreaming app is running.
-
-2. Navigate to the **Overview** page for your Stream Analytics job and select **Start**.
-
- ![Start the Stream Analytics job](./media/stream-analytics-power-bi-dashboard/stream-analytics-sa-job-start-output.png)
-
-Your Streaming Analytics job starts looking for fraudulent calls in the incoming stream. The job also creates the dataset and table in Power BI and starts sending data about the fraudulent calls to them.
--
-## Create the dashboard in Power BI
-
-1. Go to [Powerbi.com](https://powerbi.com) and sign in with your work or school account. If the Stream Analytics job query outputs results, you see that your dataset is already created:
-
- ![Streaming dataset location in Power BI](./media/stream-analytics-power-bi-dashboard/stream-analytics-streaming-dataset.png)
-
-2. In your workspace, click **+&nbsp;Create**.
-
- ![The Create button in Power BI workspace](./media/stream-analytics-power-bi-dashboard/pbi-create-dashboard.png)
-
-3. Create a new dashboard and name it `Fraudulent Calls`.
-
- ![Create a dashboard and give it a name in Power BI workspace](./media/stream-analytics-power-bi-dashboard/pbi-create-dashboard-name.png)
-
-4. At the top of the window, click **Add tile**, select **CUSTOM STREAMING DATA**, and then click **Next**.
-
- ![Custom streaming dataset tile in Power BI](./media/stream-analytics-power-bi-dashboard/custom-streaming-data.png)
-
-5. Under **YOUR DATSETS**, select your dataset and then click **Next**.
-
- ![Your streaming dataset in Power BI](./media/stream-analytics-power-bi-dashboard/your-streaming-dataset.png)
-
-6. Under **Visualization Type**, select **Card**, and then in the **Fields** list, select **fraudulentcalls**.
-
- ![Visualization details for new tile](./media/stream-analytics-power-bi-dashboard/add-fraudulent-calls-tile.png)
-
-7. Click **Next**.
-
-8. Fill in tile details like a title and subtitle.
-
- ![Title and subtitle for new tile](./media/stream-analytics-power-bi-dashboard/pbi-new-tile-details.png)
-
-9. Click **Apply**.
-
- Now you have a fraud counter!
-
- ![Fraud counter in Power BI dashboard](./media/stream-analytics-power-bi-dashboard/power-bi-fraud-counter-tile.png)
-
-8. Follow the steps again to add a tile (starting with step 4). This time, do the following:
-
- * When you get to **Visualization Type**, select **Line chart**.
- * Add an axis and select **windowend**.
- * Add a value and select **fraudulentcalls**.
- * For **Time window to display**, select the last 10 minutes.
-
- ![Create tile for line chart in Power BI](./media/stream-analytics-power-bi-dashboard/pbi-create-tile-line-chart.png)
-
-9. Click **Next**, add a title and subtitle, and click **Apply**.
-
- The Power BI dashboard now gives you two views of data about fraudulent calls as detected in the streaming data.
-
- ![Finished Power BI dashboard showing two tiles for fraudulent calls](./media/stream-analytics-power-bi-dashboard/pbi-dashboard-fraudulent-calls-finished.png)
-
-## Learn about limitations and best practices
-Currently, Power BI can be called roughly once per second. Streaming visuals support packets of 15 KB. Beyond that, streaming visuals fail (but push continues to work). Because of these limitations, Power BI lends itself most naturally to cases where Azure Stream Analytics does a significant data load reduction. We recommend using a Tumbling window or Hopping window to ensure that data push is at most one push per second, and that your query lands within the throughput requirements.
-
-You can use the following equation to compute the value to give your window in seconds:
-
-![Equation to compute value to give window in seconds](./media/stream-analytics-power-bi-dashboard/compute-window-seconds-equation.png)
-
-For example:
-
-* You have 1,000 devices sending data at one-second intervals.
-* You are using the Power BI Pro SKU that supports 1,000,000 rows per hour.
-* You want to publish the amount of average data per device to Power BI.
-
-As a result, the equation becomes:
-
-![Equation based on example criteria](./media/stream-analytics-power-bi-dashboard/power-bi-example-equation.png)
-
-Given this configuration, you can change the original query to the following:
-
-```SQL
- SELECT
- MAX(hmdt) AS hmdt,
- MAX(temp) AS temp,
- System.TimeStamp AS time,
- dspl
- INTO "CallStream-PowerBI"
- FROM
- Input TIMESTAMP BY time
- GROUP BY
- TUMBLINGWINDOW(ss,4),
- dspl
-```
-
-### Renew authorization
-If the password has changed since your job was created or last authenticated, you need to reauthenticate your Power BI account. If Azure AD Multi-Factor Authentication is configured on your Azure Active Directory (Azure AD) tenant, you also need to renew Power BI authorization every two weeks. If you don't renew, you could see symptoms such as a lack of job output or an `Authenticate user error` in the operation logs.
-
-Similarly, if a job starts after the token has expired, an error occurs and the job fails. To resolve this issue, stop the job that's running and go to your Power BI output. To avoid data loss, select the **Renew authorization** link, and then restart your job from the **Last Stopped Time**.
-
-After the authorization has been refreshed with Power BI, a green alert appears in the authorization area to reflect that the issue has been resolved.
-
-## Next steps
-* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Stream Analytics outputs](stream-analytics-define-outputs.md)
-* [Azure Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Azure Stream Analytics Management REST API reference](/rest/api/streamanalytics/)
-* [Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI](./powerbi-output-managed-identity.md)
stream-analytics Stream Analytics Stream Analytics Query Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-stream-analytics-query-patterns.md
Previously updated : 12/18/2019 Last updated : 08/29/2022
stream-analytics Stream Analytics Time Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-time-handling.md
Previously updated : 05/11/2020 Last updated : 08/22/2022 # Understand time handling in Azure Stream Analytics
stream-analytics Stream Analytics Window Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-window-functions.md
Title: Introduction to Azure Stream Analytics windowing functions
description: This article describes four windowing functions (tumbling, hopping, sliding, session) that are used in Azure Stream Analytics jobs. Previously updated : 03/16/2021 Last updated : 08/29/2022 # Introduction to Stream Analytics windowing functions
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Title: Create an Azure file share with Azure Active Directory (preview)
+ Title: Create a profile container with Azure Files and Azure Active Directory (preview)
description: Set up an FSLogix profile container on an Azure file share in an existing Azure Virtual Desktop host pool with your Azure Active Directory domain (preview). Previously updated : 08/03/2022 Last updated : 08/29/2022 # Create a profile container with Azure Files and Azure Active Directory (preview)
In this article, you'll learn how to create an Azure Files share to store FSLogix profiles that can be accessed by hybrid user identities authenticated with Azure Active Directory (Azure AD). Azure AD users can now access an Azure file share using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. Your end-users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from Hybrid Azure AD-joined and Azure AD-joined VMs.
-In this article, you'll learn how to:
+This feature is currently supported in the Azure Public, Azure Government, and Azure China clouds.
-- Configure an Azure storage account for authentication using Azure AD.-- Configure the permissions on an Azure Files share.-- Configure your session hosts to store FSLogix user profiles on Azure Files.
+## Configure your Azure storage account and file share
-## Prerequisites
+To store your FSLogix profiles on an Azure file share:
-The Azure AD Kerberos functionality is only available on the following operating systems:
+1. [Create an Azure Storage account](../storage/files/storage-how-to-create-file-share.md#create-a-storage-account) if you don't already have one.
- - Windows 11 Enterprise single or multi-session.
- - Windows 10 Enterprise single or multi-session, versions 2004 or later with the latest cumulative updates installed, especially the [KB5007253 - 2021-11 Cumulative Update Preview for Windows 10](https://support.microsoft.com/topic/november-22-2021-kb5007253-os-builds-19041-1387-19042-1387-19043-1387-and-19044-1387-preview-d1847be9-46c1-49fc-bf56-1d469fc1b3af).
- - Windows Server, version 2022 with the latest cumulative updates installed, especially the [KB5007254 - 2021-11 Cumulative Update Preview for Microsoft server operating system version 21H2](https://support.microsoft.com/topic/november-22-2021-kb5007254-os-build-20348-380-preview-9a960291-d62e-486a-adcc-6babe5ae6fc1).
-
-The user accounts must be [hybrid user identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means you'll also need Active Directory Domain Services (AD DS) and Azure AD Connect. You must create these accounts in Active Directory and sync them to Azure AD. The service doesn't currently support environments where users are managed with Azure AD and optionally synced to Azure AD Directory Services.
-
-To assign Azure Role-Based Access Control (RBAC) permissions for the Azure file share to a user group, you must create the group in Active Directory and sync it to Azure AD.
-
-You must disable multi-factor authentication (MFA) on the Azure AD app representing the storage account.
-
-> [!IMPORTANT]
-> This feature is currently only supported in the Azure Public cloud.
-
-## Configure your Azure storage account
-
-Start by [creating an Azure Storage account](../storage/files/storage-how-to-create-file-share.md#create-a-storage-account) if you don't already have one.
-
-> [!NOTE]
-> Your Azure Storage account can't authenticate with both Azure AD and a second method like Active Directory Domain Services (AD DS) or Azure AD DS. You can only use one authentication method.
-
-Follow the instructions in the following sections to configure Azure AD authentication, configure the Azure AD service principal, and set the API permission for your storage account.
-
-### Configure Azure AD authentication on your Azure Storage account
--- Install the Azure Storage PowerShell module. This module provides management cmdlets for Azure Storage resources. It's required to create storage accounts, enable Azure AD authentication on the storage account, and retrieve the storage accountΓÇÖs Kerberos keys. To install the module, open PowerShell and run the following command:-
- ```powershell
- Install-Module -Name Az.Storage
- ```
--- Install the Azure AD PowerShell module. This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management. To install this module, open PowerShell, then run the following command:-
- ```powershell
- Install-Module -Name AzureAD
- ```
-
- For more information, see [Install the Azure AD PowerShell module](/powershell/azure/active-directory/install-adv2).
--- Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.-
- ```powershell
- $tenantId = "<MyTenantId>"
- $subscriptionId = "<MySubscriptionId>"
- $resourceGroupName = "<MyResourceGroup>"
- $storageAccountName = "<MyStorageAccount>"
- ```
-
-- Enable Azure AD authentication on your storage account by running the following PowerShell cmdlets:-
- ```powershell
- Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
-
- $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
-
- $json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="AADKERB"}}};
- $json = $json | ConvertTo-Json -Depth 99
-
- $token = $(Get-AzAccessToken).Token
- $headers = @{ Authorization="Bearer $token" }
-
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json;
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Error -Message "Caught exception setting Storage Account directoryServiceOptions=AADKERB: $_" -ErrorAction Stop
- }
- ```
--- Generate the kerb1 storage account key for your storage account by running the following PowerShell command:-
- ```powershell
- New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName kerb1 -ErrorAction Stop
- ```
-
-### Configure the Azure AD service principal and application
-
-To enable Azure AD authentication on a storage account, you need to create an Azure AD application to represent the storage account in Azure AD. This configuration won't be available in the Azure portal during public preview. If the cmdlets in the previous section auto-created the application on your behalf, you can skip this step. To verify if the application has already been created, run the following cmdlet:
-
-```powershell
-Get-AzureADServicePrincipal -Searchstring "[Storage Account] $storageAccountName.file.core.windows.net"
-```
-
-If you see an existing service principal with your storage account name, skip this section and proceed with [setting the API permissions on the application](#set-the-api-permissions-on-the-newly-created-application).
-
-If not, create the application using PowerShell, following these steps:
--- Set the password (service principal secret) based on the Kerberos key of the storage account. The Kerberos key is a password shared between Azure AD and Azure Storage. Kerberos derives the password's value from the first 32 bytes of the storage accountΓÇÖs kerb1 key. To set the password, run the following cmdlets:-
- ```powershell
- $kerbKey1 = Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -ListKerbKey | Where-Object { $_.KeyName -like "kerb1" }
- $aadPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey1.Value), 32);
- $password = "kk:" + [System.Convert]::ToBase64String($aadPasswordBuffer);
- ```
--- Connect to Azure AD and retrieve the tenant information by running the following cmdlets:-
- ```powershell
- Connect-AzureAD
- $azureAdTenantDetail = Get-AzureADTenantDetail;
- $azureAdTenantId = $azureAdTenantDetail.ObjectId
- $azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name
- ```
--- Generate the service principal names for the Azure AD service principal by running these cmdlets:-
- ```powershell
- $servicePrincipalNames = New-Object string[] 3
- $servicePrincipalNames[0] = 'HTTP/{0}.file.core.windows.net' -f $storageAccountName
- $servicePrincipalNames[1] = 'CIFS/{0}.file.core.windows.net' -f $storageAccountName
- $servicePrincipalNames[2] = 'HOST/{0}.file.core.windows.net' -f $storageAccountName
- ```
--- Create an application for the storage account by running this cmdlet:-
- ```powershell
- $application = New-AzureADApplication -DisplayName $storageAccountName -IdentifierUris $servicePrincipalNames -GroupMembershipClaims "All";
- ```
--- Create a service principal for the storage account by running this cmdlet:-
- ```powershell
- $servicePrincipal = New-AzureADServicePrincipal -AccountEnabled $true -AppId $application.AppId -ServicePrincipalType "Application";
- ```
--- Set the password for the storage account's service principal by running the following cmdlets.-
- ```powershell
- $Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken
- $Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
- $json = @'
- {
- "passwordCredentials": [
- {
- "customKeyIdentifier": null,
- "endDate": "<STORAGEACCOUNTENDDATE>",
- "value": "<STORAGEACCOUNTPASSWORD>",
- "startDate": "<STORAGEACCOUNTSTARTDATE>"
- }]
- }
- '@
- $now = [DateTime]::UtcNow
- $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s")
- $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
- $json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
- $Headers = @{'authorization' = "Bearer $($Token)"}
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json
- Write-Host "Success: Password is set for $storageAccountName"
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value
- Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription
- }
- ```
-
- > [!IMPORTANT]
- > This password expires every six months, so you must update it by following the steps in [Update the service principal's password](#update-the-service-principals-password).
-
-### Set the API permissions on the newly created application
-
-You can configure the API permissions from the [Azure portal](https://portal.azure.com).
-
-If the service principal was already created for you in the last section, follow these steps:
-
-1. Open **Azure Active Directory**.
-2. Select **App registrations** on the left pane.
-3. Select **All Applications**.
-4. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**.
-5. Select **API permissions** in the left pane.
-6. Select **Add permissions** at the bottom of the page.
-7. Select **Grant admin consent for "DirectoryName"**.
-
-If you created the service principal in the last section, follow these steps:
-
-1. Open **Azure Active Directory**.
-2. Select **App registrations** on the left pane.
-3. Select **All Applications**.
-4. Select the application with the name matching your storage account.
-5. Select **API permissions** in the left pane.
-6. Select **+ Add a permission**.
-7. Select **Microsoft Graph** at the top of the page.
-8. Select **Delegated permissions**.
-9. Select **openid** and **profile** under the **OpenID** permissions group.
-10. Select **User.Read** under the **User** permission group.
-11. Select **Add permissions** at the bottom of the page.
-12. Select **Grant admin consent for "DirectoryName"**.
-
-### Disable multi-factor authentication on the storage account
-
-Azure AD Kerberos doesn't support using MFA to access Azure Files shares configured with Azure AD Kerberos. You must exclude the Azure AD app representing your storage account from your MFA conditional access policies if they apply to all apps. The storage account app should have the same name as the storage account in the conditional access exclusion list.
-
-> [!IMPORTANT]
-> If you don't exclude MFA policies from the storage account app, the FSLogix profiles won't be able to attach. Trying to map the file share using *net use* will result in an error message that says "System error 1327: Account restrictions are preventing this user from signing in. For example: blank passwords aren't allowed, sign-in times are limited, or a policy restriction has been enforced."
-
-## Configure your Azure Files share
-
-To get started, [create an Azure Files share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) under your storage account to store your FSLogix profiles if you haven't already.
-
-Follow the instructions in the following sections to configure the share-level and directory-level permissions on your Azure Files share to provide the right level of access to your users.
-
-### Assign share-level permissions
-
-You must grant your users access to the file share before they can use it. There are two ways you can assign share-level permissions: either assign them to specific Azure AD users or user groups, or you can assign them to all authenticated identities as a default share-level permission. To learn more about assigning share-level permissions, see [Assign share-level permissions to an identity](../storage/files/storage-files-identity-ad-ds-assign-permissions.md).
-
-All users that need to have FSLogix profiles stored on the storage account you're using must be assigned the **Storage File Data SMB Share Contributor** role.
-
-> [!IMPORTANT]
-> Azure Virtual Desktop currently only supports assigning specific permissions to hybrid users and user groups. Users and user groups must be managed in Active Directory and synced to Azure AD using Azure AD Connect.
-
-### Assign directory level access permissions
-
-To prevent users from accessing the user profile of other users, you must also assign directory-level permissions. This section will give you a step-by-step guide for how to configure the permissions.
-
-> [!IMPORTANT]
-> Without proper directory level permissions in place, a user can delete the user profile or access the personal information of a different user. It's important to make sure users have proper permissions to prevent accidental deletion from happening.
-
-You can set permissions (ACLs) for files and directories using either the icacls command-line utility or Windows Explorer. The system you use to configure the permissions must meet the following requirements:
--- The version of Windows meets the supported OS requirements defined in the [Prerequisites](#prerequisites) section.-- Is Azure AD-joined or Hybrid Azure AD-joined to the same Azure AD tenant as the storage account.-- Has line-of-sight to the domain controller.-- Is domain-joined to your Active Directory (Windows Explorer method only).-
-During the public preview, configuring permissions using Windows Explorer also requires storage account configuration. You can skip this configuration step when using icacls.
-
-To configure your storage account:
-
-1. On a device that's domain-joined to the Active Directory, install the [ActiveDirectory PowerShell module](/powershell/module/activedirectory/?view=windowsserver2019-ps&preserve-view=true) if you haven't already.
-
-2. Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment. You can skip this step if you've already set these values.
-
- ```powershell
- $tenantId = "<MyTenantId>"
- $subscriptionId = "<MySubscriptionId>"
- $resourceGroupName = "<MyResourceGroup>"
- $storageAccountName = "<MyStorageAccount>"
- ```
-
-3. Set the storage account's ActiveDirectoryProperties to support the Shell experience. Because Azure AD doesn't currently support configuring ACLs in Shell, it must instead rely on Active Directory. To configure Shell, run the following cmdlets in PowerShell:
-
- ```powershell
- Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
-
- $AdModule = Get-Module ActiveDirectory;
- if ($null -eq $AdModule) {
- Write-Error "Please install and/or import the ActiveDirectory PowerShell module." -ErrorAction Stop;
- }
- $domainInformation = Get-ADDomain
- $Domain = $domainInformation.DnsRoot
- $domainGuid = $domainInformation.ObjectGUID.ToString()
- $domainName = $domainInformation.DnsRoot
- $domainSid = $domainInformation.DomainSID.Value
- $forestName = $domainInformation.Forest
- $netBiosDomainName = $domainInformation.netBiosName
- $azureStorageSid = $domainSid + "-123454321";
-
- Write-Verbose "Setting AD properties on $storageAccountName in $resourceGroupName : `
- EnableActiveDirectoryDomainServicesForFile=$true, ActiveDirectoryDomainName=$domainName, `
- ActiveDirectoryNetBiosDomainName=$netBiosDomainName, ActiveDirectoryForestName=$($domainInformation.Forest) `
- ActiveDirectoryDomainGuid=$domainGuid, ActiveDirectoryDomainSid=$domainSid, `
- ActiveDirectoryAzureStorageSid=$azureStorageSid"
-
- $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
-
- $json=
- @{
- properties=
- @{azureFilesIdentityBasedAuthentication=
- @{directoryServiceOptions="AADKERB";
- activeDirectoryProperties=@{domainName="$($domainName)";
- netBiosDomainName="$($netBiosDomainName)";
- forestName="$($forestName)";
- domainGuid="$($domainGuid)";
- domainSid="$($domainSid)";
- azureStorageSid="$($azureStorageSid)"}
- }
- }
- };
-
- $json = $json | ConvertTo-Json -Depth 99
-
- $token = $(Get-AzAccessToken).Token
- $headers = @{ Authorization="Bearer $token" }
-
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Host "Error setting Storage Account AD properties. StatusCode:" $_.Exception.Response.StatusCode.value__
- Write-Host "Error setting Storage Account AD properties. StatusDescription:" $_.Exception.Response.StatusDescription
- Write-Error -Message "Caught exception setting Storage Account AD properties: $_" -ErrorAction Stop
- }
- ```
-
-Enable Azure AD Kerberos functionality by configuring the group policy or registry value in the following list:
--- Group policy: `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`-- Registry value: `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`-
-Next, make sure you can retrieve a Kerberos Ticket Granting Ticket (TGT) by following these instructions:
-
-1. Open a command window.
-2. Run the following command:
-
- ```
- dsregcmd /RefreshPrt
- ```
-
-3. Lock and then unlock your device using the same user account.
-4. In the command window, run the following commands:
-
- ```
- klist purge
- klist get krbtgt
- klist
- ```
+ > [!NOTE]
+ > Your Azure Storage account can't authenticate with both Azure AD and a second method like Active Directory Domain Services (AD DS) or Azure AD DS. You can only use one authentication method.
-5. Confirm you have a Kerberos TGT by looking for an item with a server property of `krbtgt/KERBEROS.MICROSOFTONLINE.COM @ KERBEROS.MICROSOFTONLINE.COM`.
-6. Verify you can mount the network share by running the following command in your command window:
+2. [Create an Azure Files share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) under your storage account to store your FSLogix profiles if you haven't already.
- ```
- net use <DriveLetter>: \\<storage-account-name>.file.core.windows.net\<fIle-share-name>
- ```
+3. [Enable Azure Active Directory Kerberos authentication on Azure Files](../storage/files/storage-files-identity-auth-azure-active-directory-enable.md) to enable access from Azure AD-joined VMs.
-Finally, follow the instructions in [Configure directory and file level permissions](../storage/files/storage-files-identity-ad-ds-configure-permissions.md) to finish configuring your permissions with icacls or Windows Explorer. Learn more about the recommended list of permissions for FSLogix profiles at [Configure the storage permissions for profile containers](/fslogix/fslogix-storage-config-ht).
+ - When configuring the directory and file-level permissions, review the recommended list of permissions for FSLogix profiles at [Configure the storage permissions for profile containers](/fslogix/fslogix-storage-config-ht).
+ - Without proper directory-level permissions in place, a user can delete the user profile or access the personal information of a different user. It's important to make sure users have proper permissions to prevent accidental deletion from happening. A hedged icacls sketch follows this list.
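+
+ As a non-authoritative sketch, permissions along the lines recommended for FSLogix profile containers could be applied with icacls; the UNC path is a placeholder, and the exact grants should be verified against the FSLogix guidance linked above:
+
+ ```powershell
+ # Placeholders: apply FSLogix-style ACLs at the share root via its UNC path.
+ $share = "\\mystorageaccount.file.core.windows.net\myshare"
+ icacls $share /inheritance:r                          # remove inherited ACEs so only explicit grants apply
+ icacls $share /grant "Creator Owner:(OI)(CI)(IO)(M)"  # profile owner: modify on subfolders and files only
+ icacls $share /grant "Authenticated Users:(M)"        # let users create their own profile directories
+ ```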
## Configure the session hosts

To access Azure file shares from an Azure AD-joined VM for FSLogix profiles, you must configure the session hosts. To configure the session hosts:
-1. Enable the Azure AD Kerberos functionality by configuring the group policy or registry value with the values in the following list. Once you've configured those values, restart your system to make the changes take effect.
+1. Enable the Azure AD Kerberos functionality using one of the following methods:
- - Group policy: `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`
- - Registry value: `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
+ - Configure this Intune [Policy CSP](/windows/client-management/mdm/policy-configuration-service-provider) and apply it to the session host: [Kerberos/CloudKerberosTicketRetrievalEnabled](/windows/client-management/mdm/policy-csp-kerberos#kerberos-cloudkerberosticketretrievalenabled)
+ - Configure this Group policy on the session host: `Administrative Templates\System\Kerberos\Allow retrieving the Azure AD Kerberos Ticket Granting Ticket during logon`
+ - Create the following registry value on the session host: `reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1`
2. When you use Azure AD with a roaming profile solution like FSLogix, the credential keys in Credential Manager must belong to the profile that's currently loading. This will let you load your profile on many different VMs instead of being limited to just one. To enable this setting, create a new registry value by running the following command:
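   As an assumption to verify against the published article, the registry value this step refers to is likely the following:

   ```powershell
   # Assumption: registry value that makes Credential Manager keys roam with the FSLogix profile.
   reg add HKLM\Software\Policies\Microsoft\AzureADAccount /v LoadCredKeyFromProfile /t REG_DWORD /d 1
   ```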
Once you've installed and configured FSLogix, you can test your deployment by si
If the user has signed in before, they'll have an existing local profile that the service will use during this session. To avoid creating a local profile, either create a new user account to use for tests or use the configuration methods described in [Tutorial: Configure profile container to redirect user profiles](/fslogix/configure-profile-container-tutorial/) to enable the *DeleteLocalProfileWhenVHDShouldApply* setting.
-Finally, test the profile to make sure that it works:
+Finally, verify that the profile was created in Azure Files after the user has successfully signed in:
1. Open the Azure portal and sign in with an administrative account.
Finally, test the profile to make sure that it works:
6. If everything's set up correctly, you should see a directory with a name that's formatted like this: `<user SID>_<username>`.
-## Update the service principal's password
-
-The service principal's password will expire every six months. To update the password:
-
-1. Install the Azure Storage and Azure AD PowerShell module. To install the modules, open PowerShell and run the following commands:
-
- ```powershell
- Install-Module -Name Az.Storage
- Install-Module -Name AzureAD
- ```
-
-2. Set the required variables for your tenant, subscription, storage account name, and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.
-
- ```powershell
- $tenantId = "<MyTenantId>"
- $subscriptionId = "<MySubscriptionId>"
- $resourceGroupName = "<MyResourceGroup>"
- $storageAccountName = "<MyStorageAccount>"
- ```
-
-3. Generate a new kerb1 key and password for the service principal by running this command:
-
- ```powershell
- Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
- $kerbKeys = New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName "kerb1" -ErrorAction Stop | Select-Object -ExpandProperty Keys
- $kerbKey = $kerbKeys | Where-Object { $_.KeyName -eq "kerb1" } | Select-Object -ExpandProperty Value
- $azureAdPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey), 32);
- $password = "kk:" + [System.Convert]::ToBase64String($azureAdPasswordBuffer);
- ```
-
-4. Connect to Azure AD and retrieve the tenant information, application, and service principal by running the following cmdlets:
-
- ```powershell
- Connect-AzureAD
- $azureAdTenantDetail = Get-AzureADTenantDetail;
- $azureAdTenantId = $azureAdTenantDetail.ObjectId
- $azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name
- $application = Get-AzureADApplication -Filter "DisplayName eq '$($storageAccountName)'" -ErrorAction Stop;
- $servicePrincipal = Get-AzureADServicePrincipal -Filter "AppId eq '$($application.AppId)'"
- if ($servicePrincipal -eq $null) {
- Write-Host "Could not find service principal corresponding to application with app id $($application.AppId)"
- Write-Error -Message "Make sure that both service principal and application exist and are correctly configured" -ErrorAction Stop
- }
- ```
-
-5. Set the password for the storage account's service principal by running the following cmdlets.
-
- ```powershell
- $Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken;
- $Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
- $json = @'
- {
- "passwordCredentials": [
- {
- "customKeyIdentifier": null,
- "endDate": "<STORAGEACCOUNTENDDATE>",
- "value": "<STORAGEACCOUNTPASSWORD>",
- "startDate": "<STORAGEACCOUNTSTARTDATE>"
- }]
- }
- '@
-
- $now = [DateTime]::UtcNow
- $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s")
- $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
- $json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
-
- $Headers = @{'authorization' = "Bearer $($Token)"}
-
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json
- Write-Host "Success: Password is set for $storageAccountName"
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value
- Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription
- }
- ```
-
-## Disable Azure AD authentication on your Azure Storage account
-
-If you need to disable Azure AD authentication on your storage account:
--- Set the required variables for your tenant, subscription, storage account name and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.-
- ```powershell
- $tenantId = "<MyTenantId>"
- $subscriptionId = "<MySubscriptionId>"
- $resourceGroupName = "<MyResourceGroup>"
- $storageAccountName = "<MyStorageAccount>"
- ```
--- Run the following cmdlets in PowerShell to disable Azure AD authentication on your storage account:-
- ```powershell
- Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
- $Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
-
- $json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="None"}}};
- $json = $json | ConvertTo-Json -Depth 99
-
- $token = $(Get-AzAccessToken).Token
- $headers = @{ Authorization="Bearer $token" }
-
- try {
- Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json;
- } catch {
- Write-Host $_.Exception.ToString()
- Write-Host "Error setting Storage Account directoryServiceOptions=None. StatusCode:" $_.Exception.Response.StatusCode.value__
- Write-Host "Error setting Storage Account directoryServiceOptions=None. StatusDescription:" $_.Exception.Response.StatusDescription
- Write-Error -Message "Caught exception setting Storage Account directoryServiceOptions=None: $_" -ErrorAction Stop
- }
- ```
- ## Next steps - To troubleshoot FSLogix, see [this troubleshooting guide](/fslogix/fslogix-trouble-shooting-ht).
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
+ Last updated 10/17/2016
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
+ Last updated 04/25/2018
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
Previously updated : 03/30/2018 Last updated : 05/24/2022+
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Create the SAP configuration and software installation pipeline by choosing _New
Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP configuration and software installation' by choosing 'Rename/Move' from the three-dot menu on the right.
+## Configuration Web App pipeline
+
+Create the Configuration Web App pipeline by choosing _New Pipeline_ from the Pipelines section and selecting 'Azure Repos Git' as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
+
+| Setting | Value |
+| - | -- |
+| Branch | main |
+| Path | `deploy/pipelines/21-deploy-web-app.yaml` |
+| Name | Configuration Web App |
+
+Save the pipeline. To see the Save option, select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Configuration Web App' by choosing 'Rename/Move' from the three-dot menu on the right.
+
## Deployment removal pipeline

Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section and selecting 'Azure Repos Git' as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings:
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| Variable | Value | Notes |
| -------- | ----- | ----- |
-| `ANSIBLE_HOST_KEY_CHECKING` | false | |
| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration, use 'samples/WORKSPACES' instead of WORKSPACES. |
| Branch | main | |
| S-Username | `<SAP Support user account name>` | |
| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. |
-| `PAT` | `<Personal Access Token>` | Use the Personal Token defined in the previous step. |
-| `POOL` | `<Agent Pool name>` | Use the Agent pool defined in the previous step. |
-| `advice.detachedHead` | false | |
-| `skipComponentGovernanceDetection` | true | |
| `tf_version` | 1.2.6 | The Terraform version to use; see [Terraform download](https://www.terraform.io/downloads). |

Save the variables.
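As an alternative to the Library page, a hedged sketch of creating the same variable group with the Azure DevOps CLI (assumes the azure-devops CLI extension is installed and defaults are configured; all values are placeholders):

```powershell
# Create the SDAF-General variable group from the command line (placeholders).
az devops configure --defaults organization=https://dev.azure.com/MyOrg project=MyProject
az pipelines variable-group create --name "SDAF-General" `
    --variables Deployment_Configuration_Path=WORKSPACES Branch=main tf_version=1.2.6
# Secrets such as S-Password should be added as secret variables afterwards, for example:
# az pipelines variable-group variable create --group-id <id> --name "S-Password" --secret true --value "<password>"
```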
Enter a Service connection name, for instance 'Connection to MGMT subscription'
:::image type="content" source="./media/automation-devops/automation-repo-permissions.png" alt-text="Picture showing repository permissions":::
-## Register the Deployer as a self-hosted agent for Azure DevOps
-
-You must use the Deployer as a [self-hosted agent for Azure DevOps](/azure/devops/pipelines/agents/v2-linux) to perform the Ansible configuration activities. As a one-time step, you must register the Deployer as a self-hosted agent.
--
## Deploy the Control Plane

Newly created pipelines might not be visible in the default view. Select the Recent tab and go back to the All tab to view the new pipelines.
-Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library and choose "Run" to deploy the control plane. Make sure to check "deploy the web app infrastructure" if you would like to set up the web app.
+Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library, and choose "Run" to deploy the control plane. Make sure to check "Deploy the configuration web application" if you would like to set up the configuration web app.
-Wait for the deployment to finish.
-## Configure the Azure DevOps Services self-hosted agent
+### Configure the Azure DevOps Services self-hosted agent manually
+
+> [!NOTE]
> This is only needed if the Azure DevOps Services agent isn't automatically configured. Check that the agent pool is empty before proceeding.
+ Connect to the deployer by following these steps:
The agent will now be configured and started.
Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure.
-Before running the Deploy web app pipeline, first update the reply-url values for the app registration. As a result of running the SAP workload zone deployment pipeline, part of the web app URL needed will be stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. Copy this value, and use it in the following command:
+Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the reply-url values for the app registration.
+
+As a result of running the SAP workload zone deployment pipeline, part of the URL needed for the web app is stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. Copy this value, and use it in the following command:
# [Linux](#tab/linux)
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
ANF_service_level = "Ultra"
> | `iscsi_subnet_name` | The name of the `iscsi` subnet. | Optional | | > | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet. | Mandatory | For green field deployments. | > | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet. | Mandatory | For brown field deployments. |
-> | `iscsi_subnet_nsg_name` | The name of the `iscsi` Network Security Group name | Optional | |
+> | `iscsi_subnet_nsg_name` | The name of the `iscsi` Network Security Group name | Optional | |
> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` Network Security Group | Mandatory | For brown field deployments. | > | `iscsi_count` | The number of iSCSI Virtual Machines | Optional | | > | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
ANF_service_level = "Ultra"
> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI Virtual Machines | Optional | | > | `iscsi__authentication_username` | Administrator account name | Optional | | > | `iscsi_nic_ips` | IP addresses for the iSCSI Virtual Machines | Optional | ignored if `iscsi_use_DHCP` is defined |
-
++
+## Utility VM Parameters
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -------- | ----------- | ---- | ----- |
+> | `utility_vm_count` | Defines the number of Utility virtual machines to deploy. | Optional | Use the utility virtual machine to host SAPGui |
+> | `utility_vm_size` | Defines the SKU for the Utility virtual machines. | Optional | Default: Standard_D4ds_v4 |
+> | `utility_vm_useDHCP` | Defines whether to use IP addresses provided by the Azure subnet (DHCP). | Optional | |
+> | `utility_vm_image` | Defines the virtual machine image to use. | Optional | Default: Windows Server 2019 |
+> | `utility_vm_nic_ips` | Defines the IP addresses for the virtual machines. | Optional | |
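
As a rough sketch, these parameters could be set in the workload zone's tfvars file as follows. The values are illustrative assumptions, not required defaults:

```terraform
# Illustrative values only - adjust the count, SKU, and addressing to your environment.
utility_vm_count   = 1
utility_vm_size    = "Standard_D4ds_v4"
utility_vm_useDHCP = true
```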
+ ## Terraform Parameters
virtual-machines Automation Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-devops-tutorial.md
You'll perform the following tasks during this lab:
- A configured Azure DevOps instance, follow the steps here: [Configure Azure DevOps Services for SAP Deployment Automation](automation-configure-devops.md)
-- For the 'SAP software acquisition' and the 'Configuration and SAP installation' pipelines a configured self hosted agent, see [Configure a self-hosted agent for SAP Deployment Automation](automation-configure-devops.md#register-the-deployer-as-a-self-hosted-agent-for-azure-devops)
+- For the 'SAP software acquisition' and the 'Configuration and SAP installation' pipelines, a configured self-hosted agent.
> [!Note]
> The self hosted agent virtual machine will be deployed as part of the control plane deployment.
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
The automation framework also defines the credentials for the default virtual ma
| [Service principal](#service-principal-creation) | Environment | Deployer's key vault | Environment identifier | Deployment credentials.               |
| VM credentials                                   | Environment | Workload's key vault | Environment identifier | Sets the default VM user information. |
+### SAP and virtual machine credentials management
+
+The automation framework will use the workload zone key vault for storing both the automation user credentials and the SAP system credentials. The virtual machine credentials are named as follows:
+
+| Credential | Name | Example |
+| ---------- | ---- | ------- |
+| Private key | [IDENTIFIER]-sshkey | DEV-WEEU-SAP01-sid-sshkey |
+| Public key | [IDENTIFIER]-sshkey-pub | DEV-WEEU-SAP01-sid-sshkey-pub |
+| Username | [IDENTIFIER]-username | DEV-WEEU-SAP01-sid-username |
+| Password | [IDENTIFIER]-password | DEV-WEEU-SAP01-sid-password |
+| sidadm Password | [IDENTIFIER]-[SID]-sap-password | DEV-WEEU-SAP01-X00-sap-password |
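
As a quick illustration, a secret stored under this convention can be read back with Azure PowerShell. This is a minimal sketch; the bracketed vault name is a placeholder and the secret name is a hypothetical example that follows the pattern above:

```azurepowershell-interactive
# Hypothetical vault and secret names following the [IDENTIFIER]-based convention.
Get-AzKeyVaultSecret -VaultName "[workload zone key vault name]" -Name "DEV-WEEU-SAP01-sid-username" -AsPlainText
```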
++
### Service principal creation

Create your service principal:
You'll be creating or granting access to the following services in each workload
* Azure Virtual Networks, for virtual networks, subnets and network security groups.
* Azure Key Vault, for system credentials and the deployment Service Principal.
* Azure Storage accounts, for Boot Diagnostics and Cloud Witness.
+* Shared storage for the SAP systems, using either Azure Files or Azure NetApp Files.
Before you design your workload zone layout, consider the following questions:
When planning a deployment, it's important to consider the overall flow. There a
1. Creating the deployment environment
1. Creating shared storage for Terraform state files
1. Creating shared storage for SAP installation media
-1. Preparing the workload zone. This step deploys the [workload zone components](#workload-zone-structure), such as the virtual network and key vaults.
-1. Deploying the system. This step includes the [infrastructure for the SAP system](#sap-system-setup).
+
+1. Deploy the workload zone. This step deploys the [workload zone components](#workload-zone-structure), such as the virtual network and key vaults.
+
+1. Deploy the system. This step deploys the [infrastructure for the SAP system](#sap-system-setup) and runs the [configuration and SAP installation](automation-run-ansible.md).
## Orchestration environment

For the automation framework, you must execute templates and scripts from one of the following supported environments:
-* Azure Cloud Shell
+* Azure DevOps
* An Azure-hosted Linux VM
+* Azure Cloud Shell
* PowerShell on your local Windows computer

## Naming conventions
-The automation framework uses a default naming convention. If you'd like to use a custom naming convention, plan and define your custom names before deployment.
+The automation framework uses a default naming convention. If you'd like to use a custom naming convention, plan and define your custom names before deployment. For more information, see [how to configure the naming convention](automation-naming-module.md).
## Disk sizing
virtual-machines Automation Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-supportability.md
The automation framework uses or can use the following Azure services, features,
- New or existing key vaults
- Customer-managed keys for disk encryption
- Azure Application Security Groups (ASG)
+- Azure Files for NFS
- Azure NetApp Files (ANF) - For shared files
+ - For database files
## Unsupported Azure features

At this time the automation framework **doesn't support** the following Azure services, features, or capabilities:

-- Azure Files for NFS
-- Azure NetApp Files (ANF)
- - For database files
## Next steps
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
Title: 'Connect cross-tenant VNets to a hub:PowerShell'
+ Title: 'Connect cross-tenant virtual networks to a hub: PowerShell'
-description: This article helps you connect cross-tenant VNets to a virtual hub using PowerShell.
+description: This article helps you connect cross-tenant virtual networks to a virtual hub by using PowerShell.
Last updated 09/28/2020
-# Connect cross-tenant VNets to a Virtual Wan hub
+# Connect cross-tenant virtual networks to a Virtual WAN hub
-This article helps you use Virtual WAN to connect a VNet to a virtual hub in a different tenant. This architecture is useful if you have client workloads that must be connected to be the same network, but are on different tenants. For example, as shown in the following diagram, you can connect a non-Contoso VNet (the Remote Tenant) to a Contoso virtual hub (the Parent Tenant).
+This article helps you use Azure Virtual WAN to connect a virtual network to a virtual hub in a different tenant. This architecture is useful if you have client workloads that must be connected to the same network but are on different tenants. For example, as shown in the following diagram, you can connect a non-Contoso virtual network (the remote tenant) to a Contoso virtual hub (the parent tenant).
In this article, you learn how to:

* Add another tenant as a Contributor on your Azure subscription.
-* Connect a cross tenant VNet to a virtual hub.
+* Connect a cross-tenant virtual network to a virtual hub.
-The steps for this configuration are performed using a combination of the Azure portal and PowerShell. However, the feature itself is available in PowerShell and CLI only.
+The steps for this configuration use a combination of the Azure portal and PowerShell. However, the feature itself is available in PowerShell and the Azure CLI only.
>[!NOTE]
-> Please note that cross-tenant Virtual Network connections can only be managed through PowerShell or CLI. You **cannot** manage cross-tenant Virtual Network Connections in Azure portal.
->
-## Before You Begin
+> You can manage cross-tenant virtual network connections only through PowerShell or the Azure CLI. You *cannot* manage cross-tenant virtual network connections in the Azure portal.
+
+## Before you begin
### Prerequisites

To use the steps in this article, you must have the following configuration already set up in your environment:
-* A virtual WAN and virtual hub in your parent subscription.
-* A virtual network configured in a subscription in a different (remote) tenant.
-* Make sure that the VNet address space in the remote tenant does not overlap with any other address space within any other VNets already connected to the parent virtual hub.
+* A virtual WAN and virtual hub in your parent subscription
+* A virtual network configured in a subscription in a different (remote) tenant
+
+Make sure that the virtual network address space in the remote tenant does not overlap with any other address space within any other virtual networks already connected to the parent virtual hub.
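
If you prefer to check this from PowerShell rather than the portal, a minimal sketch (the bracketed names are placeholders for your own values) is to list each network's address prefixes and compare them:

```azurepowershell-interactive
# Placeholder names - list the remote virtual network's address prefixes,
# then compare them with the prefixes of networks already connected to the hub.
(Get-AzVirtualNetwork -Name "[vnet name]" -ResourceGroupName "[resource group name]").AddressSpace.AddressPrefixes
```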
### Working with Azure PowerShell
To use the steps in this article, you must have the following configuration alre
## <a name="rights"></a>Assign permissions
-In order for the user administering the parent subscription with the virtual hub to be able to modify and access the virtual networks in the remote tenant, you need to assign **Contributor** permissions to this user. Assigning **Contributor** permissions to this user is done in the subscription of the VNET in the remote tenant.
+1. In the subscription of the virtual network in the remote tenant, add the Contributor role assignment to the administrator (the user who administers the virtual hub). Contributor permissions will enable the administrator to modify and access the virtual networks in the remote tenant.
-1. Add the **Contributor** role assignment to the administrator (the one used to administer the virtual WAN hub). You can use either PowerShell, or the Azure portal to assign this role. See the following **Add or remove role assignments** articles for steps:
+ You can use either PowerShell or the Azure portal to assign this role. See the following articles for steps:
- * [PowerShell](../role-based-access-control/role-assignments-powershell.md)
- * [Portal](../role-based-access-control/role-assignments-portal.md)
+ * [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+ * [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
-1. Next, add the remote tenant subscription and the parent tenant subscription to the current session of PowerShell. Run the following command. If you are signed into the parent, you only need to run the command for the remote tenant.
+1. Run the following command to add the remote tenant subscription and the parent tenant subscription to the current session of PowerShell. If you're signed in to the parent, you need to run the command for only the remote tenant.
   ```azurepowershell-interactive
   Connect-AzAccount -SubscriptionId "[subscription ID]" -TenantId "[tenant ID]"
   ```
-1. Verify that the role assignment is successful by logging into Azure PowerShell using the parent credentials, and running the following command:
+1. Verify that the role assignment is successful. Sign in to Azure PowerShell by using the parent credentials and run the following command:
   ```azurepowershell-interactive
   Get-AzSubscription
   ```
-1. If the permissions have successfully propagated to the parent and have been added to the session, the subscription owned by the parent **and** remote tenant will both show up in the output of the command.
+ If the permissions have successfully propagated to the parent and have been added to the session, the subscriptions owned by the parent and the remote tenant will both appear in the output of the command.
-## <a name="connect"></a>Connect VNet to hub
+## <a name="connect"></a>Connect a virtual network to a hub
-In the following steps, you will switch between the context of the two subscriptions as you link the virtual network to the virtual hub. Replace the example values to reflect your own environment.
+In the following steps, you'll use commands to switch between the context of the two subscriptions as you link the virtual network to the virtual hub. Replace the example values to reflect your own environment.
-1. Make sure you are in the context of your remote account by running the following command:
+1. Make sure you're in the context of your remote account:
   ```azurepowershell-interactive
   Select-AzSubscription -SubscriptionId "[remote ID]"
   ```
-1. Create a local variable to store the metadata of the virtual network that you want to connect to the hub.
+1. Create a local variable to store the metadata of the virtual network that you want to connect to the hub:
   ```azurepowershell-interactive
   $remote = Get-AzVirtualNetwork -Name "[vnet name]" -ResourceGroupName "[resource group name]"
   ```
-1. Switch back over to the parent account.
+1. Switch back to the parent account:
   ```azurepowershell-interactive
   Select-AzSubscription -SubscriptionId "[parent ID]"
   ```
-1. Connect the VNet to the hub.
+1. Connect the virtual network to the hub:
   ```azurepowershell-interactive
   New-AzVirtualHubVnetConnection -ResourceGroupName "[parent resource group name]" -VirtualHubName "[virtual hub name]" -Name "[name of connection]" -RemoteVirtualNetwork $remote
   ```
-1. You can view the new connection in either PowerShell, or the Azure portal.
+You can view the new connection in either PowerShell or the Azure portal:
- * **PowerShell:** The metadata from the newly formed connection will show in the PowerShell console if the connection was successfully formed.
- * **Azure portal:** Navigate to the virtual hub, **Connectivity -> Virtual Network Connections**. You can view the pointer to the connection. To see the actual resource you will need the proper permissions.
+* In the PowerShell console, the metadata from the newly formed connection will appear if the connection was successfully formed.
+* In the Azure portal, go to the virtual hub and select **Connectivity** > **Virtual Network Connections**. You can then view the pointer to the connection. To see the actual resource, you'll need the proper permissions.
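
For example, a quick way to confirm the connection from PowerShell, reusing the same placeholder names as above (a sketch, not an additional required step):

```azurepowershell-interactive
# Returns the connection metadata if the connection was created successfully.
Get-AzVirtualHubVnetConnection -ResourceGroupName "[parent resource group name]" -VirtualHubName "[virtual hub name]" -Name "[name of connection]"
```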
-## Scenario: add static routes to virtual network hub connection
-In the following steps, you will add a static route to the virtual hub default route table and virtual network connection to point to a next hop ip address (i.e NVA appliance).
-- Replace the example values to reflect your own environment.
+## Scenario: Add static routes to a virtual network hub connection
-1. Make sure you are in the context of your parent account by running the following command:
+In the following steps, you'll use commands to add a static route to the virtual hub's default route table and a virtual network connection to point to a next-hop IP address (that is, a network virtual appliance). Replace the example values to reflect your own environment.
- ```azurepowershell-interactive
-Select-AzSubscription -SubscriptionId "[parent ID]"
-```
+>[!NOTE]
+>- Before you run the commands, make sure you have access to the remote subscription and are authorized to manage it.
+>- The destination prefix can be one CIDR or multiple ones. For a single CIDR, use this format: `@("10.19.2.0/24")`. For multiple CIDRs, use this format: `@("10.19.2.0/24", "10.40.0.0/16")`.
-2. Add route in the Virtual hub default route table without a specific ip address and next hop as the virtual hub connection by:
+1. Make sure you're in the context of your parent account:
- 2.1 get the connection details:
- ```azurepowershell-interactive
- $hubVnetConnection = Get-AzVirtualHubVnetConnection -Name "[HubconnectionName]" -ParentResourceName "[Hub Name]" -ResourceGroupName "[resource group name]"
- ```
- 2.2 add a static route to the virtual hub route table (next hop is hub vnet connection):
- ```azurepowershell-interactive
 - $Route2 = New-AzVHubRoute -Name "[Route Name]" -Destination "[@("Destination prefix")]" -DestinationType "CIDR" -NextHop $hubVnetConnection.Id -NextHopType "ResourceId"
- ```
- 2.3 update the current hub default route table:
- ```azurepowershell-interactive
 - Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[Hub Name]" -Name "defaultRouteTable" -Route @($Route2)
- ```
- ## Customize static routes to specify next hop as an IP address for the virtual hub connection.
+ ```azurepowershell-interactive
+ Select-AzSubscription -SubscriptionId "[parent ID]"
+ ```
- 2.4 update the route in the vnethub connection:
- ```azurepowershell-interactive
- $newroute = New-AzStaticRoute -Name "[Route Name]" -AddressPrefix "[@("Destination prefix")]" -NextHopIpAddress "[Destination NVA IP address]"
+2. Add a route in the virtual hub's default route table that initially uses the virtual network connection, not a specific IP address, as the next hop.
- $newroutingconfig = New-AzRoutingConfiguration -AssociatedRouteTable $hubVnetConnection.RoutingConfiguration.AssociatedRouteTable.id -Id $hubVnetConnection.RoutingConfiguration.PropagatedRouteTables.Ids[0].id -Label @("default") -StaticRoute @($newroute)
+ 1. Get the connection details:
- Update-AzVirtualHubVnetConnection -ResourceGroupName $rgname -VirtualHubName "[Hub Name]" -Name "[Virtual hub connection name]" -RoutingConfiguration $newroutingconfig
+ ```azurepowershell-interactive
+ $hubVnetConnection = Get-AzVirtualHubVnetConnection -Name "[HubconnectionName]" -ParentResourceName "[Hub Name]" -ResourceGroupName "[resource group name]"
+ ```
+ 1. Add a static route to the virtual hub's route table. (The next hop is a virtual network connection.)
- ```
- 2.5 verify static route is established to a next hop IP address:
+ ```azurepowershell-interactive
 + $Route2 = New-AzVHubRoute -Name "[Route Name]" -Destination @("[Destination prefix]") -DestinationType "CIDR" -NextHop $hubVnetConnection.Id -NextHopType "ResourceId"
+ ```
+ 1. Update the hub's current default route table:
+
+ ```azurepowershell-interactive
 + Update-AzVHubRouteTable -ResourceGroupName "[resource group name]" -VirtualHubName "[Hub Name]" -Name "defaultRouteTable" -Route @($Route2)
+ ```
+
+ 1. Update the route in the virtual network connection to specify the next hop as an IP address.
- ```azurepowershell-interactive
- Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]"
- ```
+ > [!NOTE]
+ > The route name should be the same as the one you used when you added a static route earlier. Otherwise, you'll create two routes in the routing table: one without an IP address and one with an IP address.
+ ```azurepowershell-interactive
 + $newroute = New-AzStaticRoute -Name "[Route Name]" -AddressPrefix @("[Destination prefix]") -NextHopIpAddress "[Destination NVA IP address]"
->[!NOTE]
->- In step 2.2 and 2.4 the route name should be same otherwise it will create two routes one without ip address one with ip address in the routing table.
->- If you run 2.5 it will remove the previous manual config route in your routing table.
->- Make sure you have access and are authorized to the remote subscription as well when running the above.
->- Destination prefix can be one CIDR or multiple ones
->- Please use this format @("10.19.2.0/24") or @("10.19.2.0/24", "10.40.0.0/16") for multiple CIDR
->
+ $newroutingconfig = New-AzRoutingConfiguration -AssociatedRouteTable $hubVnetConnection.RoutingConfiguration.AssociatedRouteTable.id -Id $hubVnetConnection.RoutingConfiguration.PropagatedRouteTables.Ids[0].id -Label @("default") -StaticRoute @($newroute)
 + Update-AzVirtualHubVnetConnection -ResourceGroupName "[parent resource group name]" -VirtualHubName "[Hub Name]" -Name "[Virtual hub connection name]" -RoutingConfiguration $newroutingconfig
-
-## <a name="troubleshoot"></a>Troubleshooting
+ ```
+
+ This update command will remove the previous manual configuration route in your routing table.
+
+ 1. Verify that the static route is established to a next-hop IP address.
-* Verify that the metadata in $remote (from the preceding [section](#connect)) matches the information from the Azure portal.
-* You can verify permissions using the IAM settings of the remote tenant resource group, or using Azure PowerShell commands (Get-AzSubscription).
-* Make sure quotes are included around the names of resource groups or any other environment-specific variables (eg. "VirtualHub1" or "VirtualNetwork1").
+ ```azurepowershell-interactive
+ Get-AzVirtualHubVnetConnection -ResourceGroupName "[Resource group]" -VirtualHubName "[virtual hub name]" -Name "[Virtual hub connection name]"
+ ```
-## Next steps
+## <a name="troubleshoot"></a>Troubleshoot
+
+* Verify that the metadata in `$remote` (from the [preceding section](#connect)) matches the information from the Azure portal.
+* Verify permissions by using the IAM settings of the remote tenant resource group, or by using Azure PowerShell commands (`Get-AzSubscription`).
+* Make sure quotes are included around the names of resource groups or any other environment-specific variables (for example, `"VirtualHub1"` or `"VirtualNetwork1"`).
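
As a concrete check, a short sketch using the parent credentials (the bracketed subscription ID and resource group name are placeholders):

```azurepowershell-interactive
# Both the parent and remote subscriptions should appear if permissions propagated.
Get-AzSubscription

# Switch to the remote subscription, then confirm the Contributor assignment.
Select-AzSubscription -SubscriptionId "[remote ID]"
Get-AzRoleAssignment -ResourceGroupName "[remote resource group name]" -RoleDefinitionName "Contributor"
```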
-For more information about Virtual WAN, see:
+## Next steps
-* The Virtual WAN [FAQ](virtual-wan-faq.md)
+- For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
To help configure your VPN device, refer to the links that correspond to the app
| | | | | |
| A10 Networks, Inc. |Thunder CFW |ACOS 4.1.1 |Not compatible |[Configuration guide](https://www.a10networks.com/wp-content/uploads/A10-DG-16161-EN.pdf)|
| AhnLab | TrusGuard | TG 2.7.6<br>TG 3.5.x | Not tested | [Configuration guide](https://help.ahnlab.com/trusguard/cloud/azure/install/en_us/start.htm)
-| Allied Telesis |AR Series VPN Routers |AR-Series 5.4.7+ | [Configuration guide](https://www.alliedtelesis.com/documents/how-to/configure/site-to-site-vpn-between-azure-and-ar-series-router) |[Configuration guide](https://www.alliedtelesis.com/documents/how-to/configure/site-to-site-vpn-between-azure-and-ar-series-router)|
+| Allied Telesis |AR Series VPN Routers |AR-Series 5.4.7+ | [Configuration guide](https://www.alliedtelesis.com/configure/site-to-site-vpn-between-azure-and-ar-series-router) |[Configuration guide](https://www.alliedtelesis.com/configure/site-to-site-vpn-between-azure-and-ar-series-router)|
| Arista | CloudEOS Router | vEOS 4.24.0FX | Not tested | [Configuration guide](https://www.arista.com/en/cg-veos-router/veos-router-cloudeos-ipsec-connectivity-to-azure-virtual-network-gateway) |
| Barracuda Networks, Inc. |Barracuda CloudGen Firewall |PolicyBased: 5.4.3<br>RouteBased: 6.2.0 |[Configuration guide](https://campus.barracuda.com/product/cloudgenfirewall/doc/79462887/how-to-configure-an-ikev1-ipsec-site-to-site-vpn-to-the-static-microsoft-azure-vpn-gateway/) |[Configuration guide](https://campus.barracuda.com/product/cloudgenfirewall/doc/79462889/how-to-configure-bgp-over-ikev2-ipsec-site-to-site-vpn-to-an-azure-vpn-gateway/) |
| Check Point |Security Gateway |R80.10 |[Configuration guide](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk101275) |[Configuration guide](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk101275) |
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The Microsoft Threat Intelligence Collection rules are written in partnership wi
When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: *Critical*, *Error*, *Warning*, or *Notice*. The severity affects a numeric value for the request, which is called the *anomaly score*:
-| Rule severity | Values contributes to anomaly score |
+| Rule severity | Value contributed to anomaly score |
|-|-|
| Critical | 5 |
| Error | 4 |