Updates from: 05/06/2022 01:08:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
export const b2cPolicies = {
export const msalConfig: Configuration = {
  auth: {
    clientId: '<your-MyApp-application-ID>',
- authority: b2cPolicies.authorities.signUpSignIn,
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
    knownAuthorities: [b2cPolicies.authorityDomain],
    redirectUri: '/',
  },
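For context, here is a minimal sketch of the objects the corrected line refers to. The tenant name and user-flow name are placeholders (assumptions), not values from the article; only the nested `authority` reference reflects the fix shown in the diff.

```typescript
// Minimal sketch of the b2cPolicies object referenced by msalConfig.
// '<your-tenant>' and the B2C_1_susi user-flow name are placeholders.
import { Configuration } from '@azure/msal-browser';

export const b2cPolicies = {
  authorities: {
    signUpSignIn: {
      authority: 'https://<your-tenant>.b2clogin.com/<your-tenant>.onmicrosoft.com/B2C_1_susi',
    },
  },
  authorityDomain: '<your-tenant>.b2clogin.com',
};

export const msalConfig: Configuration = {
  auth: {
    clientId: '<your-MyApp-application-ID>',
    // The fix from the diff: pass the nested authority string, not the parent object.
    authority: b2cPolicies.authorities.signUpSignIn.authority,
    knownAuthorities: [b2cPolicies.authorityDomain],
    redirectUri: '/',
  },
};
```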
active-directory Active Directory Certificate Based Authentication Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-get-started.md
Previously updated : 02/10/2022 Last updated : 05/04/2022
The EAS profile must contain the following information:
- The EAS endpoint (for example, outlook.office365.com)
-An EAS profile can be configured and placed on the device through the utilization of Mobile device management (MDM) such as Intune or by manually placing the certificate in the EAS profile on the device.
+An EAS profile can be configured and placed on the device through the utilization of Mobile device management (MDM) such as Microsoft Endpoint Manager or by manually placing the certificate in the EAS profile on the device.
### Testing EAS client applications on Android
active-directory Active Directory Certificate Based Authentication Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/active-directory-certificate-based-authentication-ios.md
Previously updated : 02/16/2022 Last updated : 05/04/2022
Using certificates eliminates the need to enter a username and password combinat
| Apps | Support |
| --- | --- |
| Azure Information Protection app |![Check mark signifying support for this application][1] |
-| Intune Company Portal |![Check mark signifying support for this application][1] |
+| Company Portal |![Check mark signifying support for this application][1] |
| Microsoft Teams |![Check mark signifying support for this application][1] |
| Office (mobile) |![Check mark signifying support for this application][1] |
| OneNote |![Check mark signifying support for this application][1] |
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
Title: Combined password policy and weak password check in Azure Active Directory
-description: Learn about the combined password policy and weak password check in Azure Active Directory
+ Title: Combined password policy and check for weak passwords in Azure Active Directory
+description: Learn about the combined password policy and check for weak passwords in Azure Active Directory
Previously updated : 10/14/2021 Last updated : 05/04/2022
-# Combined password policy and weak password check in Azure Active Directory
+# Combined password policy and check for weak passwords in Azure Active Directory
Beginning in October 2021, Azure Active Directory (Azure AD) validation for compliance with password policies also includes a check for [known weak passwords](concept-password-ban-bad.md) and their variants. As the combined check for password policy and banned passwords gets rolled out to tenants, Azure AD and Office 365 admin center users may see differences when they create, change, or reset their passwords. This topic explains details about the password policy criteria checked by Azure AD.
As the combined check for password policy and banned passwords gets rolled out t
A password policy is applied to all user and admin accounts that are created and managed directly in Azure AD. You can [ban weak passwords](concept-password-ban-bad.md) and define parameters to [lock out an account](howto-password-smart-lockout.md) after repeated bad password attempts. Other password policy settings can't be modified.
-The Azure AD password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Azure AD Connect, unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers.
+The Azure AD password policy doesn't apply to user accounts synchronized from an on-premises AD DS environment using Azure AD Connect unless you enable EnforceCloudPasswordPolicyForPasswordSyncedUsers.
-The following Azure AD password policy requirements apply for all passwords that are created, changed, or reset in Azure AD. Requirements are applied during user provisioning, password change, and password reset flows. Unless noted, you can't change these settings.
+The following Azure AD password policy requirements apply for all passwords that are created, changed, or reset in Azure AD. Requirements are applied during user provisioning, password change, and password reset flows. You can't change these settings except as noted.
| Property | Requirements |
| --- | --- |
| Characters allowed |Uppercase characters (A - Z)<br>Lowercase characters (a - z)<br>Numbers (0 - 9)<br>Symbols:<br>- @ # $ % ^ & * - _ ! + = [ ] { } &#124; \ : ' , . ? / \` ~ " ( ) ; < ><br>- blank space |
| Characters not allowed | Unicode characters |
-| Password length |Passwords require<br>- A minimum of 8 characters<br>- A maximum of 256 characters</li> |
-| Password complexity |Passwords require three out of four of the following:<br>- Uppercase characters<br>- Lowercase characters<br>- Numbers <br>- Symbols<br> Note: Password complexity check is not required for Education tenants. |
-| Password not recently used | When a user changes or resets their password, the new password cannot be the same as the current or recently used passwords. |
-| Password is not banned by [Azure AD Password Protection](concept-password-ban-bad.md) | The password can't be on the global list of banned passwords for Azure AD Password Protection, or on the customizable list of banned passwords specific to your organization. |
+| Password length |Passwords require<br>- A minimum of eight characters<br>- A maximum of 256 characters</li> |
+| Password complexity |Passwords require three out of four of the following categories:<br>- Uppercase characters<br>- Lowercase characters<br>- Numbers <br>- Symbols<br> Note: Password complexity check isn't required for Education tenants. |
+| Password not recently used | When a user changes or resets their password, the new password can't be the same as the current or recently used passwords. |
+| Password isn't banned by [Azure AD Password Protection](concept-password-ban-bad.md) | The password can't be on the global list of banned passwords for Azure AD Password Protection, or on the customizable list of banned passwords specific to your organization. |
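As an illustration only (not Azure AD's actual validation code), a minimal sketch of the length and three-of-four complexity rules described in the table above; the function name and the Education-tenant flag are assumptions:

```typescript
// Illustrative check of the password policy rules from the table above.
// This is NOT how Azure AD implements the check; it only mirrors the stated rules.
function meetsComplexity(password: string, isEducationTenant = false): boolean {
  // 8-256 characters
  if (password.length < 8 || password.length > 256) {
    return false;
  }
  // The complexity check isn't required for Education tenants.
  if (isEducationTenant) {
    return true;
  }
  const categories = [
    /[A-Z]/.test(password),        // uppercase characters
    /[a-z]/.test(password),        // lowercase characters
    /[0-9]/.test(password),        // numbers
    /[^A-Za-z0-9]/.test(password), // symbols or blank space
  ];
  // At least three of the four categories must be present.
  return categories.filter(Boolean).length >= 3;
}
```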
## Password expiration policies
-Password expiration policies are unchanged but they are included in this topic for completeness. A *global administrator* or *user administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
+Password expiration policies are unchanged but they're included in this topic for completeness. A *global administrator* or *user administrator* can use the [Microsoft Azure AD Module for Windows PowerShell](/powershell/module/Azuread/) to set user passwords not to expire.
> [!NOTE]
> By default, only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](../hybrid/how-to-connect-password-hash-synchronization.md#password-expiration-policy). You can also use PowerShell to remove the never-expires configuration, or to see user passwords that are set to never expire.
-The following expiration requirements apply to other providers that use Azure AD for identity and directory services, such as Intune and Microsoft 365.
+The following expiration requirements apply to other providers that use Azure AD for identity and directory services, such as Microsoft Endpoint Manager and Microsoft 365.
| Property | Requirements |
| --- | --- |
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Previously updated : 07/13/2021 Last updated : 05/04/2022
Organizations that rely on a single access control, such as multi-factor authent
This document provides guidance on strategies an organization should adopt to provide resilience to reduce the risk of lockout during unforeseen disruptions with the following scenarios:
- 1. Organizations can increase their resiliency to reduce the risk of lockout **before a disruption** by implementing mitigation strategies or contingency plans.
- 2. Organizations can continue to access apps and resources they choose **during a disruption** by having mitigation strategies and contingency plans in place.
- 3. Organizations should make sure they preserve information, such as logs, **after a disruption** and before they roll back any contingencies they implemented.
 - 4. Organizations that haven't implemented prevention strategies or alternative plans may be able to implement **emergency options** to deal with the disruption.
+ - Organizations can increase their resiliency to reduce the risk of lockout **before a disruption** by implementing mitigation strategies or contingency plans.
+ - Organizations can continue to access apps and resources they choose **during a disruption** by having mitigation strategies and contingency plans in place.
+ - Organizations should make sure they preserve information, such as logs, **after a disruption** and before they roll back any contingencies they implemented.
 + - Organizations that haven't implemented prevention strategies or alternative plans may be able to implement **emergency options** to deal with the disruption.
## Key guidance
To unlock admin access to your tenant, you should create emergency access accoun
Incorporate the following access controls in your existing Conditional Access policies for organization:
-1. Provision multiple authentication methods for each user that rely on different communication channels, for example the Microsoft Authenticator app (internet-based), OATH token (generated on-device), and SMS (telephonic). The following PowerShell script will help you identify in advance, which additional methods your users should register: [Script for Azure AD MFA authentication method analysis](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/).
-2. Deploy Windows Hello for Business on Windows 10 devices to satisfy MFA requirements directly from device sign-in.
-3. Use trusted devices via [Azure AD Hybrid Join](../devices/overview.md) or [Microsoft Intune Managed devices](/intune/planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
-4. Use Azure AD identity protection risk-based policies that prevent access when the user or sign-in is at risk in place of fixed MFA policies.
-5. If you are protecting VPN access using Azure AD MFA NPS extension, consider federating your VPN solution as a [SAML app](../manage-apps/view-applications-portal.md) and determine the app category as recommended below.
+- Provision multiple authentication methods for each user that rely on different communication channels, for example the Microsoft Authenticator app (internet-based), OATH token (generated on-device), and SMS (telephonic). The following PowerShell script will help you identify in advance, which additional methods your users should register: [Script for Azure AD MFA authentication method analysis](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/).
+- Deploy Windows Hello for Business on Windows 10 devices to satisfy MFA requirements directly from device sign-in.
+- Use trusted devices via [Azure AD Hybrid Join](../devices/overview.md) or [Microsoft Endpoint Manager](/intune/planning-guide). Trusted devices will improve user experience because the trusted device itself can satisfy the strong authentication requirements of policy without an MFA challenge to the user. MFA will then be required when enrolling a new device and when accessing apps or resources from untrusted devices.
+- Use Azure AD identity protection risk-based policies that prevent access when the user or sign-in is at risk in place of fixed MFA policies.
+- If you are protecting VPN access using Azure AD MFA NPS extension, consider federating your VPN solution as a [SAML app](../manage-apps/view-applications-portal.md) and determine the app category as recommended below.
>[!NOTE]
> Risk-based policies require [Azure AD Premium P2](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) licenses.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 06/25/2021 Last updated : 05/04/2022
active-directory How To Authentication Sms Supported Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-sms-supported-apps.md
SMS-based authentication is available to Microsoft apps integrated with the Micr
| Office 365- Microsoft Online Services* | ● | |
| Microsoft One Note | ● | |
| Microsoft Teams | ● | ● |
-| Microsoft Intune Company portal | ● | ● |
+| Company portal | ● | ● |
| My Apps Portal | ● |Not available|
| Microsoft Forms | ● |Not available|
| Microsoft Edge | ● | |
The above mentioned Microsoft apps support SMS sign-in is because they use the M
## Unsupported Microsoft apps

Microsoft 365 desktop (Windows or Mac) apps and Microsoft 365 web apps (except MS One Note) that are accessed directly on the web don't support SMS sign-in. These apps use the Microsoft Office login (`https://office.live.com/start/*`) that requires a password to sign in.
-For the same reason, Microsoft Office mobile apps (except Microsoft Teams, Intune Company Portal, and Microsoft Azure) don't support SMS sign-in.
+For the same reason, Microsoft Office mobile apps (except Microsoft Teams, Company Portal, and Microsoft Azure) don't support SMS sign-in.
| Unsupported Microsoft apps| Examples |
| --- | --- |
| Native desktop Microsoft apps | Microsoft Teams, O365 apps, Word, Excel, etc.|
-| Native mobile Microsoft apps (except Microsoft Teams, Intune Company Portal, and Microsoft Azure) | Outlook, Edge, Power BI, Stream, SharePoint, Power Apps, Word, etc.|
+| Native mobile Microsoft apps (except Microsoft Teams, Company Portal, and Microsoft Azure) | Outlook, Edge, Power BI, Stream, SharePoint, Power Apps, Word, etc.|
| Microsoft 365 web apps (accessed directly on web) | [Outlook](https://outlook.live.com/owa/), [Word](https://office.live.com/start/Word.aspx), [Excel](https://office.live.com/start/Excel.aspx), [PowerPoint](https://office.live.com/start/PowerPoint.aspx), [OneDrive](https://onedrive.live.com/about/signin)|

## Support for Non-Microsoft apps
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 03/22/2022 Last updated : 05/05/2022
In addition to choosing who can be nudged, you can define how many days a user c
![Installation complete](./media/how-to-nudge-authenticator-app/finish.png)
-1. If a user wishes to not install the Authenticator app, they can tap **Not now** to snooze the prompt for a number of days, which can be defined by an admin.
+1. If a user wishes to not install Microsoft Authenticator, they can tap **Not now** to snooze the prompt for up to 14 days, which can be set by an admin.
![Snooze installation](./media/how-to-nudge-authenticator-app/snooze.png)
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Previously updated : 05/28/2021 Last updated : 05/04/2022
There are three types of passwordless sign-in deployments available with securit
Enabling Windows 10 sign-in using FIDO2 security keys requires you to enable the credential provider functionality in Windows 10. Choose one of the following:
-* [Enable credential provider with Intune](howto-authentication-passwordless-security-key-windows.md)
+* [Enable credential provider with Microsoft Endpoint Manager](howto-authentication-passwordless-security-key-windows.md)
- * We recommend Intune deployment.
+ * We recommend Microsoft Endpoint Manager deployment.
* [Enable credential provider with a provisioning package](howto-authentication-passwordless-security-key-windows.md)
- * If Intune deployment isn't possible, administrators must deploy a package on each machine to enable the credential provider functionality. The package installation can be carried out by one of the following options:
+ * If Microsoft Endpoint Manager deployment isn't possible, administrators must deploy a package on each machine to enable the credential provider functionality. The package installation can be carried out by one of the following options:
* Group Policy or Configuration Manager
* Local installation on a Windows 10 machine
active-directory Howto Authentication Passwordless Security Key Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-windows.md
Previously updated : 04/20/2022 Last updated : 05/04/2022
To target specific device groups to enable the credential provider, use the foll
### Enable with a provisioning package
-For devices not managed by Intune, a provisioning package can be installed to enable the functionality. The Windows Configuration Designer app can be installed from the [Microsoft Store](https://www.microsoft.com/p/windows-configuration-designer/9nblggh4tx22). Complete the following steps to create a provisioning package:
+For devices not managed by Microsoft Endpoint Manager, a provisioning package can be installed to enable the functionality. The Windows Configuration Designer app can be installed from the [Microsoft Store](https://www.microsoft.com/p/windows-configuration-designer/9nblggh4tx22). Complete the following steps to create a provisioning package:
1. Launch the Windows Configuration Designer.
1. Select **File** > **New project**.
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Previously updated : 02/02/2022 Last updated : 05/05/2022
To enable your support team's success, you can create a FAQ based on questions y
| User isn't receiving a text or call on their office or cell phone| A user is trying to verify their identity via text or call but isn't receiving a text/call. |
| User can't access the password reset portal| A user wants to reset their password but isn't enabled for password reset and can't access the page to update passwords. |
| User can't set a new password| A user completes verification during the password reset flow but can't set a new password. |
-| User doesn't see a Reset Password link on a Windows 10 device| A user is trying to reset password from the Windows 10 lock screen, but the device is either not joined to Azure AD, or the Intune device policy isn't enabled |
+| User doesn't see a Reset Password link on a Windows 10 device| A user is trying to reset password from the Windows 10 lock screen, but the device is either not joined to Azure AD, or the Microsoft Endpoint Manager device policy isn't enabled |
### Plan rollback
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
To configure a Windows 10 device for SSPR at the sign-in screen, review the foll
- Azure AD joined
- Hybrid Azure AD joined
-### Enable for Windows 10 using Intune
+### Enable for Windows 10 using Microsoft Endpoint Manager
-Deploying the configuration change to enable SSPR from the login screen using Intune is the most flexible method. Intune allows you to deploy the configuration change to a specific group of machines you define. This method requires Intune enrollment of the device.
+Deploying the configuration change to enable SSPR from the login screen using Microsoft Endpoint Manager is the most flexible method. Microsoft Endpoint Manager allows you to deploy the configuration change to a specific group of machines you define. This method requires Microsoft Endpoint Manager enrollment of the device.
-#### Create a device configuration policy in Intune
+#### Create a device configuration policy in Microsoft Endpoint Manager
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Intune**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Endpoint Manager**.
1. Create a new device configuration profile by going to **Device configuration** > **Profiles**, then select **+ Create Profile**
   - For **Platform** choose *Windows 10 and later*
   - For **Profile type**, choose *Custom*
Deploying the configuration change to enable SSPR from the login screen using In
Select **Add**, then **Next**.
1. The policy can be assigned to specific users, devices, or groups. Assign the profile as desired for your environment, ideally to a test group of devices first, then select **Next**.
- For more information, see [Assign user and device profiles in Microsoft Intune](/mem/intune/configuration/device-profile-assign).
+ For more information, see [Assign user and device profiles in Microsoft Endpoint Manager](/mem/intune/configuration/device-profile-assign).
1. Configure applicability rules as desired for your environment, such as to *Assign profile if OS edition is Windows 10 Enterprise*, then select **Next**.
1. Review your profile, then select **Create**.
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Previously updated : 03/03/2020 Last updated : 05/05/2022 #Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
To help protect a web app that signs in a user:
- If you develop in .NET, you use ASP.NET or ASP.NET Core with the ASP.NET OpenID Connect middleware. Protecting a resource involves validating the security token, which is done by the [IdentityModel extensions for .NET](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki) and not MSAL libraries.
-- If you develop in Node.js, you use [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) or [Passport.js](https://github.com/AzureAD/passport-azure-ad).
+- If you develop in Node.js, you use [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node).
For more information, see [Web app that signs in users](scenario-web-app-sign-user-overview.md).
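To make the MSAL Node option above concrete, here is a rough sketch of a confidential client web app; the client ID, authority, secret, scopes, and redirect URI are placeholders, not values from the article:

```typescript
// Sketch of an MSAL Node confidential client for a web app that signs in users.
// All auth values below are placeholders.
import { ConfidentialClientApplication } from '@azure/msal-node';

const cca = new ConfidentialClientApplication({
  auth: {
    clientId: '<application-id>',
    authority: 'https://login.microsoftonline.com/<tenant-id>',
    clientSecret: '<client-secret>',
  },
});

// Build the URL the app redirects the user to for sign-in; the authorization
// code returned to the redirect URI is later redeemed with acquireTokenByCode.
export async function getSignInUrl(): Promise<string> {
  return cca.getAuthCodeUrl({
    scopes: ['user.read'],
    redirectUri: 'http://localhost:3000/redirect',
  });
}
```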
Microsoft Authentication Libraries support multiple platforms:
You can also use various languages to build your applications.
-> [!NOTE]
-> Some application types aren't available on every platform.
- In the Windows column of the following table, each time .NET Core is mentioned, .NET Framework is also possible. The latter is omitted to avoid cluttering the table.

|Scenario | Windows | Linux | Mac | iOS | Android
For more information, see [Microsoft identity platform authentication libraries]
## Next steps
-* Learn more about [authentication basics](./authentication-vs-authorization.md) and [access tokens in the Microsoft identity platform](access-tokens.md).
-* Learn more about [securing access to IoT apps](/azure/architecture/example-scenario/iot-aad/iot-aad).
+For more information about authentication, see:
+
+- [Authentication vs. authorization.](./authentication-vs-authorization.md)
+- [Microsoft identity platform access tokens.](access-tokens.md)
+- [Securing access to IoT apps.](/azure/architecture/example-scenario/iot-aad/iot-aad#security)
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
Previously updated : 09/03/2020 Last updated : 05/05/2022
This diagram shows how the two app registrations relate to one another. In this
Once you've registered both your client app and web API and you've exposed the API by creating scopes, you can configure the client's permissions to the API by following these steps:

1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application (*not* your web API).
1. Select **API permissions** > **Add a permission** > **My APIs**.
1. Select the web API you registered as part of the prerequisites.
In addition to accessing your own web API on behalf of the signed-in user, your
Configure delegated permission to Microsoft Graph to enable your client application to perform operations on behalf of the logged-in user, for example reading their email or modifying their profile. By default, users of your client app are asked when they sign in to consent to the delegated permissions you've configured for it.

1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application.
1. Select **API permissions** > **Add a permission** > **Microsoft Graph**
1. Select **Delegated permissions**. Microsoft Graph exposes many permissions, with the most commonly used shown at the top of the list.
Configure application permissions for an application that needs to authenticate
In the following steps, you grant Microsoft Graph's *Files.Read.All* permission as an example.

1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/quickstart-configure-app-access-web-apis/portal-01-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant containing your client app's registration.
1. Select **Azure Active Directory** > **App registrations**, and then select your client application.
1. Select **API permissions** > **Add a permission** > **Microsoft Graph** > **Application permissions**.
1. All permissions exposed by Microsoft Graph are shown under **Select permissions**.
The **Grant admin consent** button is *disabled* if you aren't an admin or if no
## Next steps
-Advance to the next quickstart in the series to learn how to configure which account types can access your application. For example, you might want to limit access only to those users in your organization (single-tenant) or allow users in other Azure AD tenants (multi-tenant) and those with personal Microsoft accounts (MSA).
+Advance to the next quickstart in the series to learn how to configure which account types can access your application. For example, you might want to limit access only to those users in your organization (single-tenant) or allow users in other Azure Active Directory (Azure AD) tenants (multi-tenant) and those with personal Microsoft accounts (MSA).
> [!div class="nextstepaction"]
> [Modify the accounts supported by an application](./howto-modify-supported-accounts.md)
active-directory Tutorial V2 React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-react.md
Previously updated : 04/16/2021 Last updated : 05/05/2022
MSAL React supports the authorization code flow in the browser instead of the im
:::image type="content" source="media/tutorial-v2-javascript-auth-code/diagram-01-auth-code-flow.png" alt-text="Diagram showing the authorization code flow in a single-page application":::
-The application you create in this tutorial enables a React SPA to query the Microsoft Graph API by acquiring security tokens from the the Microsoft identity platform. It uses the Microsoft Authentication Library (MSAL) for React, a wrapper of the MSAL.js v2 library. MSAL React enables React 16+ applications to authenticate enterprise users by using Azure Active Directory (Azure AD), and also users with Microsoft accounts and social identities like Facebook, Google, and LinkedIn. The library also enables applications to get access to Microsoft cloud services and Microsoft Graph.
+The application you create in this tutorial enables a React SPA to query the Microsoft Graph API by acquiring security tokens from the Microsoft identity platform. It uses the MSAL for React, a wrapper of the MSAL.js v2 library. MSAL React enables React 16+ applications to authenticate enterprise users by using Azure Active Directory (Azure AD), and also users with Microsoft accounts and social identities like Facebook, Google, and LinkedIn. The library also enables applications to get access to Microsoft cloud services and Microsoft Graph.
-In this scenario, after a user signs in, an access token is requested and added to HTTP requests in the authorization header. Token acquisition and renewal are handled by the Microsoft Authentication Library for React (MSAL React).
+In this scenario, after a user signs in, an access token is requested and added to HTTP requests in the authorization header. Token acquisition and renewal are handled by the MSAL for React (MSAL React).
### Libraries
Prefer to download this tutorial's completed sample project instead? To run the
Then, to configure the code sample before you execute it, skip to the [configuration step](#register-your-application).
-To continue with the tutorial and build the application yourself, move on to the next section, [Prerequisites](#prerequisites).
+To continue with the tutorial and build the application yourself, move on to the next section, [Create your project](#create-your-project).
## Create your project
npm install @azure/msal-browser @azure/msal-react # Install the MSAL packages
npm install react-bootstrap bootstrap # Install Bootstrap for styling
```
-You have now bootstrapped a small React project using [Create React App](https://create-react-app.dev/docs/getting-started). This will be the starting point the rest of this tutorial will build on. If you would like to see the changes to your app as you are working through this tutorial you can run the following command:
+You've now bootstrapped a small React project using [Create React App](https://create-react-app.dev/docs/getting-started). This will be the starting point the rest of this tutorial will build on. If you'd like to see the changes to your app as you're working through this tutorial you can run the following command:
```console
npm start
```
-A browser window should be opened to your app automatically. If it does not, open your browser and navigate to http://localhost:3000. Each time you save a file with updated code the page will reload to reflect the changes.
+A browser window should be opened to your app automatically. If it doesn't, open your browser and navigate to http://localhost:3000. Each time you save a file with updated code the page will reload to reflect the changes.
## Register your application
export const SignInButton = () => {
export default App;
```
-Your app now has a sign-in button which is only displayed for unauthenticated users!
+Your app now has a sign-in button, which is only displayed for unauthenticated users!
When a user selects the **Sign in using Popup** or **Sign in using Redirect** button for the first time, the `onClick` handler calls `loginPopup` (or `loginRedirect`) to sign in the user. The `loginPopup` method opens a pop-up window with the *Microsoft identity platform endpoint* to prompt and validate the user's credentials. After a successful sign-in, *msal.js* initiates the [authorization code flow](v2-oauth2-auth-code-flow.md).
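A condensed sketch of the sign-in handler described above is shown next; the `loginRequest` scopes and the button markup are assumptions rather than the tutorial's exact code:

```typescript
// Sketch of a sign-in button using MSAL React; the scopes are placeholders.
import React from 'react';
import { useMsal } from '@azure/msal-react';

const loginRequest = { scopes: ['User.Read'] };

export const SignInButton = () => {
  const { instance } = useMsal();

  const handleLogin = () => {
    // loginPopup opens a pop-up window against the Microsoft identity platform;
    // swap in instance.loginRedirect(loginRequest) for the redirect flow.
    instance.loginPopup(loginRequest).catch((error) => console.error(error));
  };

  return <button onClick={handleLogin}>Sign in using Popup</button>;
};
```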
In order to render certain components only for authenticated or unauthenticated
};
```
-1. Update your imports in *src/App.js* to match the following:
+1. Update your imports in *src/App.js* to match the following snippet:
```js
import React, { useState } from "react";
After a user signs in, your app shouldn't ask users to reauthenticate every time
Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` redirects users to the Microsoft identity platform). In that window, users need to interact by confirming their credentials, giving consent to the required resource, or completing the two-factor authentication.
-> [!NOTE]
-> If you're using Internet Explorer, we recommend that you use the `loginRedirect` and `acquireTokenRedirect` methods due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups) with Internet Explorer and pop-up windows.
+If you're using Internet Explorer, we recommend that you use the `loginRedirect` and `acquireTokenRedirect` methods due to a [known issue](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/internet-explorer.md#popups) with Internet Explorer and pop-up windows.
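As a sketch of the token renewal pattern described here, the following shows silent acquisition with an interactive fallback; the scopes and hook name are assumptions:

```typescript
// Silent token acquisition with an interactive fallback, per the pattern above.
import { InteractionRequiredAuthError } from '@azure/msal-browser';
import { useMsal } from '@azure/msal-react';

export function useGraphToken() {
  const { instance, accounts } = useMsal();

  return async function getToken(): Promise<string> {
    const request = { scopes: ['User.Read'], account: accounts[0] };
    try {
      // Try to use a cached token or renew it silently first.
      const result = await instance.acquireTokenSilent(request);
      return result.accessToken;
    } catch (error) {
      if (error instanceof InteractionRequiredAuthError) {
        // Fall back to an interactive prompt; prefer acquireTokenRedirect in
        // browsers (such as Internet Explorer) that have issues with pop-ups.
        const result = await instance.acquireTokenPopup(request);
        return result.accessToken;
      }
      throw error;
    }
  };
}
```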
## Call the Microsoft Graph API
Calling `acquireTokenPopup` opens a pop-up window (or `acquireTokenRedirect` red
};
```
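For context, a minimal sketch of how the acquired access token can be sent to Microsoft Graph, similar in spirit to the tutorial's `callMsGraph` helper; the exact helper shape here is an assumption:

```typescript
// Call Microsoft Graph's /me endpoint with the access token acquired above.
export async function callMsGraph(accessToken: string): Promise<unknown> {
  const response = await fetch('https://graph.microsoft.com/v1.0/me', {
    headers: {
      // The token is passed in the authorization header as a Bearer token.
      Authorization: `Bearer ${accessToken}`,
    },
  });
  return response.json();
}
```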
-1. Next, open *src/App.js* and add these to the imports:
+1. Next, open *src/App.js* and add the following imports:
```javascript
import { ProfileData } from "./components/ProfileData";
You've completed creation of the application and are now ready to launch the web
```console
npm start
```
-1. A browser window should be opened to your app automatically. If it does not, open your browser and navigate to `http://localhost:3000`. You should see a page that looks like the one below.
+1. A browser window should be opened to your app automatically. If it doesn't, open your browser and navigate to `http://localhost:3000`. You should see a page that looks like the one below.
:::image type="content" source="media/tutorial-v2-react/react-01-unauthenticated.png" alt-text="Web browser displaying sign-in dialog":::
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Previously updated : 11/23/2021 Last updated : 05/04/2022
# Restrict guest access permissions in Azure Active Directory
-Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. This is a new guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so your guest access levels are:
+Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. There's another guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so that the guest access levels are:
-Permission level | Access level | Value
-- | | --
-Same as member users | Guests have the same access to Azure AD resources as member users | a0b1b346-4d3e-4e8b-98f8-753987be4970
-Limited access (default) | Guests can see membership of all non-hidden groups | 10dae51f-b6af-4016-8d66-8c2a99b929b3
+Permission level | Access level | Value
+- | | --
+Same as member users | Guests have the same access to Azure AD resources as member users | a0b1b346-4d3e-4e8b-98f8-753987be4970
+Limited access (default) | Guests can see membership of all non-hidden groups | 10dae51f-b6af-4016-8d66-8c2a99b929b3
**Restricted access (new)** | **Guests can't see membership of any groups** | **2af84b1e-32c8-42b7-82bc-daa82404023b**

When guest access is restricted, guests can view only their own user profile. Permission to view other users isn't allowed even if the guest is searching by User Principal Name or objectId. Restricted access also restricts guest users from seeing the membership of groups they're in. For more information about the overall default user permissions, including guest user permissions, see [What are the default user permissions in Azure Active Directory?](../fundamentals/users-default-permissions.md).
PS C:\WINDOWS\system32> Set-AzureADMSAuthorizationPolicy -GuestUserRoleId '2af84
### Supported services
-By supported we mean that the experience is as expected; specifically, that it is same as current guest experience.
+By supported we mean that the experience is as expected; specifically, that it's the same as the current guest experience.
- Teams
- Outlook (OWA)
Service without current support might have compatibility issues with the new gue
Question | Answer
-- | --
-Where do these permissions apply? | These directory level permissions are enforced across Azure AD services and portals including the Microsoft Graph, PowerShell v2, the Azure portal, and My Apps portal. Microsoft 365 services leveraging Microsoft 365 groups for collaboration scenarios are also affected, specifically Outlook, Microsoft Teams, and SharePoint.
-How do restricted permissions affect which groups guests can see? | Regardless of default or restricted guest permissions, guests can't enumerate the list of groups or users. Guests can see groups they are members of in both the Azure portal and the My Apps portal depending on permissions:<li>**Default permissions**: To find the groups they are members of in the Azure portal, the guest must search for their object ID in the **All users** list, and then select **Groups**. Here they can see the list of groups that they are members of, including all the group details, including name, email, and so on. In the My Apps portal, they can see a list of groups they own and groups they are a member of.</li><li>**Restricted guest permissions**: In the Azure portal, they can still find the list of groups they are members of by searching for their object ID in the All users list, and then select Groups. They can only see very limited details about the group, notably the object ID. By design, the Name and Email columns are blank and Group Type is Unrecognized. In the My Apps portal, they are not able to access the list of groups they own or groups they are a member of.</li><br>For more detailed comparison of the directory permissions that come from the Graph API, see [Default user permissions](../fundamentals/users-default-permissions.md#member-and-guest-users).
-Which parts of the My Apps portal will this feature affect? | The groups functionality in the My Apps portal will honor these new permissions. This includes all paths to view the groups list and group memberships in My Apps. No changes were made to the group tile availability. The group tile availability is still controlled by the existing group setting in the Azure portal.
-Do these permissions override SharePoint or Microsoft Teams guest settings? | No. Those existing settings still control the experience and access in those applications. For example, if you see issues in SharePoint, double check your external sharing settings.
+Where do these permissions apply? | These directory level permissions are enforced across Azure AD services including the Microsoft Graph, PowerShell v2, the Azure portal, and My Apps portal. Microsoft 365 services leveraging Microsoft 365 groups for collaboration scenarios are also affected, specifically Outlook, Microsoft Teams, and SharePoint.
+How do restricted permissions affect which groups guests can see? | Regardless of default or restricted guest permissions, guests can't enumerate the list of groups or users. Guests can see groups they're members of in both the Azure portal and the My Apps portal depending on permissions:<ul><li>**Default permissions**: To find the groups they're members of in the Azure portal, the guest must search for their object ID in the **All users** list, and then select **Groups**. Here they can see the list of groups that they're members of, including all the group details, including name, email, and so on. In the My Apps portal, they can see a list of groups they own and groups they're in.</li><li>**Restricted guest permissions**: In the Azure portal, they can find the list of groups they're in by searching for their object ID in the **All users** list, and then selecting **Groups**. They can see only limited details about the group, notably the object ID. By design, the Name and Email columns are blank and Group Type is Unrecognized. In the My Apps portal, they're not able to access the list of groups they own or groups they're a member of.</li></ul><br>For more detailed comparison of the directory permissions that come from the Graph API, see [Default user permissions](../fundamentals/users-default-permissions.md#member-and-guest-users).
+Which parts of the My Apps portal will this feature affect? | The groups functionality in the My Apps portal will honor these new permissions. This functionality includes all paths to view the groups list and group memberships in My Apps. No changes were made to the group tile availability. The group tile availability is still controlled by the existing group setting in the Azure portal.
+Do these permissions override SharePoint or Microsoft Teams guest settings? | No. Those existing settings still control the experience and access in those applications. For example, if you see issues in SharePoint, double check your external sharing settings. Guests added by team owners at the team level have access to channel meeting chat only for standard channels, excluding any private and shared channels.
What are the known compatibility issues in Yammer? | With permissions set to 'restricted', guests signed into Yammer won't be able to leave the group.
-Will my existing guest permissions be changed in my tenant? | No changes were made to your current settings. We maintain backward compatibility with your existing settings. You decide when you want make changes.
+Will my existing guest permissions be changed in my tenant? | No changes were made to your current settings. We maintain backward compatibility with your existing settings. You decide when you want to make changes.
Will these permissions be set by default? | No. The existing default permissions remain unchanged. You can optionally set the permissions to be more restrictive.
Are there any license requirements for this feature? | No, there are no new licensing requirements with this feature.
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
Previously updated : 04/26/2022 Last updated : 05/05/2022 -
For B2B collaboration with other Azure AD organizations, you should also review
- **Guest users have the same access as members (most inclusive)**: This option gives guests the same access to Azure AD resources and directory data as member users.
- - **Guest users have limited access to properties and memberships of directory objects**: (Default) This setting blocks guests from certain directory tasks, like enumerating users, groups, or other directory resources. Guests can see membership of all non-hidden groups.
+ - **Guest users have limited access to properties and memberships of directory objects**: (Default) This setting blocks guests from certain directory tasks, like enumerating users, groups, or other directory resources. Guests can see membership of all non-hidden groups. [Learn more about default guest permissions](../fundamentals/users-default-permissions.md#member-and-guest-users).
- **Guest user access is restricted to properties and memberships of their own directory objects (most restrictive)**: With this setting, guests can access only their own profiles. Guests are not allowed to see other users' profiles, groups, or group memberships.
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB
### MSAL authentication library
-The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2 release ships with the newer MSAL library. For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated after December 2022. The V2 release ships with the newer MSAL library. For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
### Visual C++ Redist 14
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
na Previously updated : 05/02/2022 Last updated : 05/05/2022
In Azure AD, a resource access has three relevant components:
- **What** – The target (Resource) accessed by the identity.
-Each component has an associated unique identifier (ID). Below is an example of user using the Windows Azure classic deployment model to access the Azure portal.
+Each component has an associated unique identifier (ID). Below is an example of user using the Microsoft Azure classic deployment model to access the Azure portal.
![Open audit logs](./media/reference-basic-info-sign-in-logs/sign-in-details-basic-info.png)
The sign-in log tracks two tenant identifiers:
- **Resource tenant** – The tenant that owns the (target) resource. These identifiers are relevant in cross-tenant scenarios. For example, to find out how users outside your tenant are accessing your resources, select all entries where the home tenant doesn't match the resource tenant.
+For the home tenant, Azure AD tracks the ID and the name.
### Request ID
active-directory Cyberark Saml Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cyberark-saml-authentication-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure CyberArk SAML Authentication SSO
-To configure single sign-on on **CyberArk SAML Authentication** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [CyberArk SAML Authentication support team](mailto:bizdevtech@cyberark.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CyberArk SAML Authentication** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [CyberArk SAML Authentication support team](mailto:support@cyberark.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create CyberArk SAML Authentication test user
-In this section, you create a user called B.Simon in CyberArk SAML Authentication. Work with [CyberArk SAML Authentication support team](mailto:bizdevtech@cyberark.com) to add the users in the CyberArk SAML Authentication platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in CyberArk SAML Authentication. Work with [CyberArk SAML Authentication support team](mailto:support@cyberark.com) to add the users in the CyberArk SAML Authentication platform. Users must be created and activated before you use single sign-on.
## Test SSO
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
For example, assuming a new file has been created with the base64 encoded string
az aks update -n $clusterName -g $resourceGroup --http-proxy-config aks-proxy-config-2.json
```
-## Monitoring Addon Configurations
+## Monitoring add-on configuration
-Below list the supported and not supported configuration for the monitoring addon.
-
-Supported configuration(s)
+When using the HTTP proxy with the Monitoring add-on, the following configurations are supported:
- Outbound proxy without authentication
- Outbound proxy with username & password authentication
- Outbound proxy with trusted cert for Log Analytics endpoint
-Not supported configuration(s)
+The following configurations are not supported:
- - Custom Metrics and Recommended alerts feature are not supported in Proxy with trusted cert
- - Outbound proxy support with Azure Monitor Private Link Scope (AMPLS)
+ - The Custom Metrics and Recommended Alerts features are not supported when using proxy with trusted cert
+ - Outbound proxy is not supported with Azure Monitor Private Link Scope (AMPLS)
## Next steps

- For more on the network requirements of AKS clusters, see [control egress traffic for cluster nodes in AKS][aks-egress].
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
For a pod to use AAD pod-managed identity, the pod needs an *aadpodidbinding* la
To run a sample application using AAD pod-managed identity, create a `demo.yaml` file with the following contents. Replace *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* with the values from the previous steps. Replace *SUBSCRIPTION_ID* with your subscription ID. > [!NOTE]
-> In the previous steps, you created the *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* variables. You can use a command such as `echo` to display the value you set for variables, for example `echo $IDENTITY_NAME`.
+> In the previous steps, you created the *POD_IDENTITY_NAME*, *IDENTITY_CLIENT_ID*, and *IDENTITY_RESOURCE_GROUP* variables. You can use a command such as `echo` to display the value you set for variables, for example `echo $POD_IDENTITY_NAME`.
```yml
apiVersion: v1
attestation Policy Version 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-0.md
This article introduces the workings of the attestation service and the policy e
The minimum version of the policy supported by the service is version 1.0. The attestation service flow is as follows:

- The platform sends the attestation evidence in the attest call to the attestation service.
attestation Policy Version 1 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-1.md
This article introduces the workings of the attestation service and the policy e
## Policy version 1.1

The attestation flow is as follows:

- The platform sends the attestation evidence in the attest call to the attestation service.
attestation Policy Version 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md
This article introduces the workings of the attestation service and the policy e
## Policy Version 1.2

The attestation flow is as follows:

- The platform sends the attestation evidence in the attest call to the attestation service.
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
Title: Azure Automation services overview
-description: This article tells what Azure Automation services are and how to use it to automate the lifecycle of infrastructure and applications.
+ Title: Automation services in Azure - overview
+description: This article tells what the Automation services in Azure are and how to use them to automate the lifecycle of infrastructure and applications.
keywords: azure automation services, automanage, Bicep, Blueprints, Guest Config, Policy, Functions Last updated 03/04/2022
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monit
The Hybrid Runbook Worker feature supports the following operating systems:
+* Windows Server 2022 (including Server Core)
* Windows Server 2019 (including Server Core)
* Windows Server 2016, version 1709 and 1803 (excluding Server Core)
* Windows Server 2012, 2012 R2
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Azure Automation stores and manages runbooks and then delivers them to one or mo
| Windows | Linux (x64)|
|---|---|
-| &#9679; Windows Server 2019 (including Server Core), <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> &#9679; Windows Server 2012, 2012 R2 <br><br> | &#9679; Debian GNU/Linux 7 and 8, <br> &#9679; Ubuntu 18.04, and 20.04 LTS, <br> &#9679; SUSE Linux Enterprise Server 15, and 15.1 (SUSE didn't release versions numbered 13 or 14), and <br> &#9679; Red Hat Enterprise Linux Server 7 and 8 |
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> &#9679; Windows Server 2012, 2012 R2 | &#9679; Debian GNU/Linux 7 and 8 <br> &#9679; Ubuntu 18.04, and 20.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15, and 15.1 (SUSE didn't release versions numbered 13 or 14), and <br> &#9679; Red Hat Enterprise Linux Server 7 and 8 |
### Other Requirements
azure-app-configuration Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview-managed-identity.md
# How to use managed identities for Azure App Configuration
-This topic shows you how to create a managed identity for Azure App Configuration. A managed identity from Azure Active Directory (Azure AD) allows Azure App Configuration to easily access other Azure AD protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. It does not require you to provision or rotate any secrets. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+This topic shows you how to create a managed identity for Azure App Configuration. A managed identity from Azure Active Directory (Azure AD) allows Azure App Configuration to easily access other Azure AD protected resources. The identity is managed by the Azure platform. It does not require you to provision or rotate any secrets. For more about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
Your application can be granted two types of identities:
azure-arc Delete Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-azure-resources.md
Last updated 11/03/2021
-# Delete resources from Azure
+# Delete resources from Azure Arc-enabled data services
This article describes how to delete Azure Arc-enabled data service resources from Azure.
In indirect connect mode, deleting an instance from Kubernetes will not remove i
In some cases, you may need to manually delete Azure Arc-enabled data services resources in Azure. You can delete these resources using any of the following options. -- [Delete resources from Azure](#delete-resources-from-azure)
- - [Delete an entire resource group](#delete-an-entire-resource-group)
- - [Delete specific resources in the resource group](#delete-specific-resources-in-the-resource-group)
- - [Delete resources using the Azure CLI](#delete-resources-using-the-azure-cli)
- - [Delete SQL managed instance resources using the Azure CLI](#delete-sql-managed-instance-resources-using-the-azure-cli)
- - [Delete PostgreSQL Hyperscale server group resources using the Azure CLI](#delete-postgresql-hyperscale-server-group-resources-using-the-azure-cli)
- - [Delete Azure Arc data controller resources using the Azure CLI](#delete-azure-arc-data-controller-resources-using-the-azure-cli)
- - [Delete a resource group using the Azure CLI](#delete-a-resource-group-using-the-azure-cli)
+- [Delete an entire resource group](#delete-an-entire-resource-group)
+- [Delete specific resources in the resource group](#delete-specific-resources-in-the-resource-group)
+- [Delete resources using the Azure CLI](#delete-resources-using-the-azure-cli)
+ - [Delete SQL managed instance resources using the Azure CLI](#delete-sql-managed-instance-resources-using-the-azure-cli)
+ - [Delete PostgreSQL Hyperscale server group resources using the Azure CLI](#delete-postgresql-hyperscale-server-group-resources-using-the-azure-cli)
+ - [Delete Azure Arc data controller resources using the Azure CLI](#delete-azure-arc-data-controller-resources-using-the-azure-cli)
+ - [Delete a resource group using the Azure CLI](#delete-a-resource-group-using-the-azure-cli)
## Delete an entire resource group
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Updated ElasticSearch to latest version `7.9.1-36fefbab37-205465`. Also Grafana
All container image sizes were reduced by approximately 40% on average.
-Introduced new `create-sql-keytab.ps1` PowerShell script to add in creation of keytabs.
+Introduced new `create-sql-keytab.ps1` PowerShell script to aid in creation of keytabs.
### SQL Managed Instance
Add support for `NodeSelector`, `TopologySpreadConstraints` and `Affinity`. Onl
Add support for specifying labels and annotations on the secondary service endpoint. `REQUIRED_SECONDARIES_TO_COMMIT` is now a function of the number of replicas. -- If more than three replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 1`.
+- If three replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 1`.
- If one or two replicas, then `REQUIRED_SECONDARIES_TO_COMMIT = 0`. ### User experience improvements
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-| OpenShift 7.13 | 1.20.0 | v1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
+| OpenShift 4.7.13 | 1.20.0 | v1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
### VMware
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0 and <= 2.29.0
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0
* Install the **connectedk8s** Azure CLI extension of version >= 1.2.0:
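A hedged sketch of those prerequisite commands follows; the exact versions you end up with depend on what's currently published:

```bash
# Upgrade the Azure CLI itself, then add or upgrade the connectedk8s extension
az upgrade
az extension add --upgrade --name connectedk8s
```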
eastus AzureArcTest1 microsoft.kubernetes/connectedclusters
+## Connect a cluster with custom certificate
+
+If the outbound communication from the Arc agents needs to authenticate via a certificate, pass the certificate during onboarding. If you need to pass multiple certificates, combine them into a single certificate chain and pass that single file, as sketched below.
+
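As a hedged illustration (the PEM file names below are placeholders, not values from the article), you might build the chain like this before running the connect command shown in the Azure CLI tab:

```bash
# Concatenate the individual certificates (hypothetical file names) into one chain file,
# then pass the chain file to the onboarding command via --proxy-cert.
cat proxy-ca-1.pem proxy-ca-2.pem > proxy-cert-chain.pem
```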
+### [Azure CLI](#tab/azure-cli)
+
+Run the connect command with parameters specified:
+
+```azurecli
+az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+This scenario is not supported via the PowerShell cmdlet.
+++ ## Connect using an outbound proxy server If your cluster is behind an outbound proxy server, requests must be routed via the outbound proxy server.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Cisco | [Intersight Kubernetes Service (IKS)](https://www.cisco.com/c/en/us/products/cloud-systems-management/cloud-operations/intersight-kubernetes-service.html) Distribution | Upstream K8s version: 1.19.5 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.21.3 | | Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 |
-| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
+| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Arc
+ Title: Use Azure Private Link to securely connect servers to Azure Arc
description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 02/28/2022 Last updated : 05/04/2022
-# Use Azure Private Link to securely connect networks to Azure Arc
+# Use Azure Private Link to securely connect servers to Azure Arc
[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multi-cloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks. Starting with Azure Arc-enabled servers, you can use a Private Link Scope model to allow multiple servers or machines to communicate with their Azure Arc resources using a single private endpoint.
-This article covers when to use and how to set up an Azure Arc Private Link Scope (preview).
-
-> [!NOTE]
-> Azure Arc Private Link Scope (preview) is available in all commercial cloud regions, it is not available in the US Government cloud today.
+This article covers when to use and how to set up an Azure Arc Private Link Scope.
## Advantages
For more information, see [Key Benefits of Private Link](../../private-link/pri
## How it works
-Azure Arc Private Link Scope (preview) connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any one of the Azure Arc-enabled servers supported VM extensions, such as Azure Automation Update Management or Azure Monitor, those resources connect other Azure resources. Such as:
+Azure Arc Private Link Scope connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc-enabled servers. When you enable any one of the Azure Arc-enabled servers supported VM extensions, such as Azure Automation Update Management or Azure Monitor, those resources connect to other Azure resources, such as:
- Log Analytics workspace, required for Azure Automation Update Management, Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Log Analytics agent. - Azure Automation account, required for Update Management and Change Tracking and Inventory.
For more information about configuring Private Link for the Azure services liste
The Azure Arc-enabled servers Private Link Scope object has a number of limits you should consider when planning your Private Link setup. - You can associate at most one Azure Arc Private Link Scope with a virtual network.- - An Azure Arc-enabled machine or server resource can only connect to one Azure Arc-enabled servers Private Link Scope.- - All on-premises machines need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md)- - The Azure Arc-enabled server and Azure Arc Private Link Scope must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled server.--- Traffic to Azure Active Directory and Azure Resource Manager service tags must be allowed through your on-premises network firewall during the preview.-
+- Network traffic to Azure Active Directory and Azure Resource Manager does not traverse the Azure Arc Private Link Scope and will continue to use your default network route to the internet. You can optionally [configure a resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) to send Azure Resource Manager traffic to a private endpoint.
- Other Azure services that you will use, for example Azure Monitor, require their own private endpoints in your virtual network. -- Azure Arc-enabled servers Private Link Scope is not currently available in Azure US Government regions.- ## Planning your Private Link setup To connect your server to Azure Arc over a private link, you need to configure your network to accomplish the following: 1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-arm.md).
-1. Deploy an Azure Arc Private Link Scope (preview), which controls which machines or servers can communicate with Azure Arc over private endpoints and associate it with your Azure virtual network using a private endpoint.
+1. Deploy an Azure Arc Private Link Scope, which controls which machines or servers can communicate with Azure Arc over private endpoints and associate it with your Azure virtual network using a private endpoint.
1. Update the DNS configuration on your local network to resolve the private endpoint addresses.
-1. Configure your local firewall to allow access to Azure Active Directory and Azure Resource Manager. This is a temporary step and will not be required when private endpoints for these services enter preview.
+1. Configure your local firewall to allow access to Azure Active Directory and Azure Resource Manager.
1. Associate the machines or servers registered with Azure Arc-enabled servers with the private link scope.
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Select **Create**.
-1. Pick a Subscription and Resource Group. During the preview, your virtual network and Azure Arc-enabled servers must be in the same subscription as the Azure Arc Private Link Scope.
+1. Pick a Subscription and Resource Group.
1. Give the Azure Arc Private Link Scope a name. It's best to use a meaningful and clear name.
- You can optionally require every Azure Arc-enabled machine or server associated with this Azure Arc Private Link Scope (preview) to send data to the service through the private endpoint. If you select **Enable public network access**, machines or servers associated with this Azure Arc Private Link Scope (preview) can communicate with the service over both private or public networks. You can change this setting after creating the scope if you change your mind.
+ You can optionally require every Azure Arc-enabled machine or server associated with this Azure Arc Private Link Scope to send data to the service through the private endpoint. If you select **Enable public network access**, machines or servers associated with this Azure Arc Private Link Scope can communicate with the service over both private and public networks. You can change this setting after creating the scope if you change your mind.
1. Select **Review + Create**.
See the visual diagram under the section [How it works](#how-it-works) for the n
## Create a private endpoint
-Once your Azure Arc Private Link Scope (preview) is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space.
+Once your Azure Arc Private Link Scope is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space.
1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Add** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**.
Once your Azure Arc Private Link Scope (preview) is created, you need to connect
## Configure on-premises DNS forwarding
-Your on-premises machines or servers need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you’re using Azure private DNS zones to maintain DNS records, or if you’re using your own DNS server on-premises and how many servers you’re configuring.
+Your on-premises machines or servers need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you're using Azure private DNS zones to maintain DNS records, or if you're using your own DNS server on-premises and how many servers you're configuring.
### DNS configuration using Azure-integrated private DNS zones
If you opted out of using Azure private DNS zones during private endpoint creati
1. Navigate to the private endpoint resource associated with your virtual network and private link scope.
-1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you’ll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet.
+1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet.
:::image type="content" source="./media/private-link-security/dns-configuration.png" alt-text="DNS configuration details" border="true":::
-1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every machine or server that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope (preview), or the connection will be refused.
+1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every machine or server that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope, or the connection will be refused.
### Single server scenarios
-If you’re only planning to use Private Links to support a few machines or servers, you may not want to update your entire network’s DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating systems **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address.
+If you're only planning to use Private Links to support a few machines or servers, you may not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating systems **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address.
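For a Linux machine, a minimal sketch of such Hosts file entries might look like the following; the IP addresses are hypothetical and must come from your own private endpoint's DNS configuration page:

```bash
# Append private endpoint entries to /etc/hosts (example IP addresses only)
echo "10.0.0.4 gbl.his.arc.azure.com" | sudo tee -a /etc/hosts
echo "10.0.0.5 agentserviceapi.guestconfiguration.azure.com" | sudo tee -a /etc/hosts
```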
#### Windows
When connecting a machine or server with Azure Arc-enabled servers for the first
1. In the **Resource group** drop-down list, select the resource group the machine will be managed from. 1. In the **Region** drop-down list, select the Azure region to store the machine or server metadata. 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on.
- 1. Under **Network Connectivity**, select **Private endpoint (preview)** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list.
+ 1. Under **Network Connectivity**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list.
:::image type="content" source="./media/private-link-security/arc-enabled-servers-create-script.png" alt-text="Selecting Private Endpoint connectivity option" border="true":::
It may take up to 15 minutes for the Private Link Scope to accept connections fr
## Troubleshooting
-1. Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network’s DNS configuration.
+1. Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network's DNS configuration.
``` nslookup gbl.his.arc.azure.com nslookup agentserviceapi.guestconfiguration.azure.com ```
-1. If you are having trouble onboarding a machine or server, confirm that you’ve added the Azure Active Directory and Azure Resource Manager service tags to your local network firewall. The agent needs to communicate with these services over the internet until private endpoints are available for these services.
+1. If you are having trouble onboarding a machine or server, confirm that you've added the Azure Active Directory and Azure Resource Manager service tags to your local network firewall. The agent needs to communicate with these services over the internet until private endpoints are available for these services.
## Next steps
azure-functions Durable Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-versions.md
# Durable Functions versions overview
-*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) and [Azure WebJobs](../../app-service/webjobs-create.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. If you are not already familiar with Durable Functions, see the [overview documentation](durable-functions-overview.md).
+*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) and [Azure WebJobs](../../app-service/webjobs-create.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. If you aren't already familiar with Durable Functions, see the [overview documentation](durable-functions-overview.md).
## New features in 2.x
This section describes the features of Durable Functions that are added in versi
In Durable Functions 2.x, we introduced a new [entity functions](durable-functions-entities.md) concept.
-Entity functions define operations for reading and updating small pieces of state, known as *durable entities*. Like orchestrator functions, entity functions are functions with a special trigger type, *entity trigger*. Unlike orchestrator functions, entity functions do not have any specific code constraints. Entity functions also manage state explicitly rather than implicitly representing state via control flow.
+Entity functions define operations for reading and updating small pieces of state, known as *durable entities*. Like orchestrator functions, entity functions are functions with a special trigger type, *entity trigger*. Unlike orchestrator functions, entity functions don't have any specific code constraints. Entity functions also manage state explicitly rather than implicitly representing state via control flow.
To learn more, see the [durable entities](durable-functions-entities.md) article.
To update the extension bundle version in your project, open host.json and updat
Update your .NET project to use the latest version of the [Durable Functions bindings extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask).
-See [Register Azure Functions binding extensions](../functions-bindings-register.md#local-csharp) for more information.
+See [Register Azure Functions binding extensions](../functions-develop-vs.md?tabs=in-process#add-bindings) for more information.
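For example, from the project directory you might update the package with the .NET CLI (a sketch; omit `--version` to take the latest stable release, or pin the version that suits your app):

```bash
# Update the Durable Functions extension package in a .NET Functions project
dotnet add package Microsoft.Azure.WebJobs.Extensions.DurableTask
```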
### Update your code
-Durable Functions 2.x introduces several breaking changes. Durable Functions 1.x applications are not compatible with Durable Functions 2.x without code changes. This section lists some of the changes you must make when upgrading your version 1.x functions to 2.x.
+Durable Functions 2.x introduces several breaking changes. Durable Functions 1.x applications aren't compatible with Durable Functions 2.x without code changes. This section lists some of the changes you must make when upgrading your version 1.x functions to 2.x.
#### Host.json schema
Durable Functions 2.x uses a new host.json schema. The main changes from 1.x inc
* `"storageProvider"` (and the `"azureStorage"` subsection) for storage-specific configuration. * `"tracing"` for tracing and logging configuration.
-* `"notifications"` (and the `"eventGrid"` subsection) for event grid notification configuration.
+* `"notifications"` (and the `"eventGrid"` subsection) for Event Grid notification configuration.
See the [Durable Functions host.json reference documentation](durable-functions-bindings.md#durable-functions-2-0-host-json) for details.
-#### Default taskhub name changes
+#### Default task hub name changes
-In version 1.x, if a task hub name was not specified in host.json, it was defaulted to "DurableFunctionsHub". In version 2.x, the default task hub name is now derived from the name of the function app. Because of this, if you have not specified a task hub name when upgrading to 2.x, your code will be operating with new task hub, and all in-flight orchestrations will no longer have an application processing them. To work around this, you can either explicitly set your task hub name to the v1.x default of "DurableFunctionsHub", or you can follow our [zero-downtime deployment guidance](durable-functions-zero-downtime-deployment.md) for details on how to handle breaking changes for in-flight orchestrations.
+In version 1.x, if a task hub name wasn't specified in host.json, it was defaulted to "DurableFunctionsHub". In version 2.x, the default task hub name is now derived from the name of the function app. Because of this, if you haven't specified a task hub name when upgrading to 2.x, your code will be operating with new task hub, and all in-flight orchestrations will no longer have an application processing them. To work around this, you can either explicitly set your task hub name to the v1.x default of "DurableFunctionsHub", or you can follow our [zero-downtime deployment guidance](durable-functions-zero-downtime-deployment.md) for details on how to handle breaking changes for in-flight orchestrations.
#### Public interface changes (.NET only)
In Durable Functions 1.x, the orchestration client binding uses a `type` of `orc
#### Raise event changes
-In Durable Functions 1.x, calling the [raise event](durable-functions-external-events.md#send-events) API and specifying an instance that did not exist resulted in a silent failure. Starting in 2.x, raising an event to a non-existent orchestration results in an exception.
+In Durable Functions 1.x, calling the [raise event](durable-functions-external-events.md#send-events) API and specifying an instance that didn't exist resulted in a silent failure. Starting in 2.x, raising an event to a non-existent orchestration results in an exception.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
In version 2.x and later versions of the Functions runtime, configures app behav
In version 2.x and later versions of the Functions runtime, application settings can override [host.json](functions-host-json.md) settings in the current environment. These overrides are expressed as application settings named `AzureFunctionsJobHost__path__to__setting`. For more information, see [Override host.json values](functions-host-json.md#override-hostjson-values).
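As a hedged example of that naming convention, the following sketch overrides the host.json value `logging.logLevel.default` for an existing app; the app and resource group names are placeholders:

```bash
# Override a host.json setting through an application setting (double underscores replace the dots)
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings AzureFunctionsJobHost__logging__logLevel__default=Information
```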
+## AzureFunctionsWebHost__hostid
+
+Sets the host ID for a given function app, which should be a unique ID. This setting overrides the automatically generated host ID value for your app. Use this setting only when you need to prevent host ID collisions between function apps that share the same storage account.
+
+A host ID must be between 1 and 32 characters, contain only lowercase letters, numbers, and dashes, not start or end with a dash, and not contain consecutive dashes. An easy way to generate an ID is to take a GUID, remove the dashes, and make it lower case, such as by converting the GUID `1835D7B5-5C98-4790-815D-072CC94C6F71` to the value `1835d7b55c984790815d072cc94c6f71`.
+
+|Key|Sample value|
+|||
+|AzureFunctionsWebHost__hostid|`myuniquefunctionappname123456789`|
+
+For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations).
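A minimal sketch of that conversion and of applying the setting, assuming a Linux or macOS shell and placeholder app names:

```bash
# Generate a GUID, strip the dashes, and lowercase it to produce a valid host ID
HOST_ID=$(uuidgen | tr -d '-' | tr '[:upper:]' '[:lower:]')

# Apply it as the AzureFunctionsWebHost__hostid application setting
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings AzureFunctionsWebHost__hostid=$HOST_ID
```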
+ ## AzureWebJobsDashboard Optional storage account connection string for storing logs and displaying them in the **Monitor** tab in the portal. This setting is only valid for apps that target version 1.x of the Azure Functions runtime. The storage account must be a general-purpose one that supports blobs, queues, and tables. To learn more, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
Title: Register Azure Functions binding extensions description: Learn to register an Azure Functions binding extension based on your environment. Previously updated : 09/14/2020 Last updated : 03/19/2022 # Register Azure Functions binding extensions Starting with Azure Functions version 2.x, the functions runtime only includes HTTP and timer triggers by default. Other [triggers and bindings](./functions-triggers-bindings.md) are available as separate packages.
-.NET class library functions apps use bindings that are installed in the project as NuGet packages. Extension bundles allows non-.NET functions apps to use the same bindings without having to deal with the .NET infrastructure.
+.NET class library functions apps use bindings that are installed in the project as NuGet packages. Extension bundles allow non-.NET functions apps to use the same bindings without having to deal with the .NET infrastructure.
The following table indicates when and how you register bindings.
The following table indicates when and how you register bindings.
|-||| |Azure portal|Automatic|Automatic<sup>*</sup>| |Non-.NET languages|Automatic|Use [extension bundles](#extension-bundles) (recommended) or [explicitly install extensions](#explicitly-install-extensions)|
-|C# class library using Visual Studio|[Use NuGet tools](#vs)|[Use NuGet tools](#vs)|
-|C# class library using Visual Studio Code|N/A|[Use .NET Core CLI](#vs-code)|
+|C# class library using Visual Studio|[Use NuGet tools](functions-develop-vs.md#add-bindings)|[Use NuGet tools](functions-develop-vs.md#add-bindings)|
+|C# class library using Visual Studio Code|N/A|[Use .NET Core CLI](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions)|
-<sup>*</sup> Portal uses extension bundles.
+<sup>*</sup> The portal uses extension bundles, including for C# script apps.
-## Access extensions in non-.NET languages
+## <a name="extension-bundles"></a>Extension bundles
-For Java, JavaScript, PowerShell, Python, and Custom Handler function apps, we recommended using extension bundles to access bindings. In cases where extension bundles cannot be used, you can explicitly install binding extensions.
+By default, extension bundles are used by Java, JavaScript, PowerShell, Python, C# script, and Custom Handler function apps to work with binding extensions. In cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later versions of the Functions runtime.
-### <a name="extension-bundles"></a>Extension bundles
+Extension bundles are a way to add a pre-defined set of compatible binding extensions to your function app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
-Extension bundles is a way to add a compatible set of binding extensions to your function app. You enable extension bundles in the app's *host.json* file.
+When you create a non-.NET Functions project from tooling or in the portal, extension bundles are already enabled in the app's *host.json* file.
-You can use extension bundles with version 2.x and later versions of the Functions runtime.
-
-Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
-
-To add an extension bundle to your function app, add the `extensionBundle` section to *host.json*. In many cases, Visual Studio Code and Azure Functions Core Tools will automatically add it for you.
+An extension bundle reference is defined by the `extensionBundle` section in a *host.json* as follows:
[!INCLUDE [functions-extension-bundles-json](../../includes/functions-extension-bundles-json.md)]
The following table lists the currently available versions of the default *Micro
| 2.x | `[2.*, 3.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/v2.x/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle | | 3.x | `[3.3.0, 4.0.0)` | See [extensions.json](https://github.com/Azure/azure-functions-extension-bundles/blob/4f5934a18989353e36d771d0a964f14e6cd17ac3/src/Microsoft.Azure.Functions.ExtensionBundle/extensions.json) used to generate the bundle<sup>1</sup> |
-<sup>1</sup> Version 3.x of the extension bundle currently does not include the [Table Storage bindings](./functions-bindings-storage-table.md). If your app requires Table Storage, you will need to continue using the 2.x version for now.
+<sup>1</sup> Version 3.x of the extension bundle currently doesn't include the [Table Storage bindings](./functions-bindings-storage-table.md). If your app requires Table Storage, you'll need to continue using the 2.x version for now.
> [!NOTE] > While you can specify a custom version range in host.json, we recommend you use a version value from this table.
-### Explicitly install extensions
-
-If you aren't able to use extension bundles, you can use Azure Functions Core Tools locally to install the specific extension packages required by your project.
-
-> [!IMPORTANT]
-> You can't explicitly install extensions in a function app that is using extension bundles. Remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
-
-The following items describe some reasons you might need to install extensions manually:
-
-* You need to access a specific version of an extension not available in a bundle.
-* You need to access a custom extension not available in a bundle.
-* You need to access a specific combination of extensions not available in a single bundle.
-
-> [!NOTE]
-> To manually install extensions by using Core Tools, you must have the [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) installed. The .NET Core SDK is used by Azure Functions Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions.
-
-When you explicitly install extensions, a .NET project file named extensions.csproj is added to the root of your project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit the file.
-
-There are several ways to use Core Tools to install the required extensions in your local project.
-
-#### Install all extensions
-
-Use the following command to automatically add all extension packages used by the bindings in your local project:
-
-```command
-func extensions install
-```
-
-The command reads the *function.json* file to see which packages you need, installs them, and rebuilds the extensions project (extensions.csproj). It adds any new bindings at the current version but does not update existing bindings. Use the `--force` option to update existing bindings to the latest version when installing new ones. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
-
-If your function app uses bindings that Core Tools does not recognize, you must manually install the specific extension.
-
-#### Install a specific extension
-
-Use the following command to install a specific extension package at a specific version, in this case the Storage extension:
-
-```command
-func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
-```
-
-To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
-
-## <a name="local-csharp"></a>Install extensions from NuGet in .NET languages
-
-For a C# class library-based functions project, you should install extensions directly. Extension bundles is designed specifically for projects that aren't C# class library-based.
-
-### <a name="vs"></a> C\# class library with Visual Studio
-
-In **Visual Studio**, you can install packages from the Package Manager Console using the [Install-Package](/nuget/tools/ps-ref-install-package) command, as shown in the following example:
-
-```powershell
-Install-Package Microsoft.Azure.WebJobs.Extensions.ServiceBus -Version <TARGET_VERSION>
-```
-
-The name of the package used for a given binding is provided in the reference article for that binding.
-
-Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
-
-If you use `Install-Package` to reference a binding, you don't need to use [extension bundles](#extension-bundles). This approach is specific for class libraries built in Visual Studio.
-
-### <a name="vs-code"></a> C# class library with Visual Studio Code
-
-In **Visual Studio Code**, install packages for a C# class library project from the command prompt using the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the .NET Core CLI. The following example demonstrates how you add a binding:
-
-```terminal
-dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
-```
+## Explicitly install extensions
-The .NET Core CLI can only be used for Azure Functions 2.x development.
+For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
-Replace `<BINDING_TYPE_NAME>` with the name of the package that contains the binding you need. You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
+For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
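A hedged sketch of such a manual install, run from the root of your local project and using the Storage extension as an example; substitute the package and version your bindings actually require:

```bash
# Install a specific binding extension package with Azure Functions Core Tools
func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
```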
-Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
+For portal-only development, you need to manually create an extensions.csproj file in the root of your function app. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions).
## Next steps > [!div class="nextstepaction"]
azure-functions Functions Bindings Storage Queue Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md
See the [Example section](#example) for complete examples.
::: zone pivot="programming-language-csharp" The usage of the Queue output binding depends on the extension package version and the C# modality used in your function app, which can be one of the following:
-# [In-process class library](#tab/in-process)
+# [In-process](#tab/in-process)
An in-process class library is a compiled C# function that runs in the same process as the Functions runtime.
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
Except for HTTP and timer triggers, bindings are implemented in extension packag
# [C\#](#tab/csharp)
-Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. The following command installs the Azure Storage extension, which implements bindings for Blob, Queue, and Table storage.
+Run the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to install the extension packages that you need in your project. The following example demonstrates how you add a binding for an [in-process class library](functions-dotnet-class-library.md):
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 3.0.4
+```terminal
+dotnet add package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
```
+The following example demonstrates how you add a binding for an [isolated-process class library](dotnet-isolated-process-guide.md):
+
+```terminal
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE_NAME> --version <TARGET_VERSION>
+```
+
+In either case, replace `<BINDING_TYPE_NAME>` with the name of the package that contains the binding you need. You can find the desired binding reference article in the [list of supported bindings](./functions-triggers-bindings.md#supported-bindings).
+
+Replace `<TARGET_VERSION>` in the example with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
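For instance, to reference the Azure Storage bindings (the package names below illustrate the pattern; check NuGet.org for the version you need):

```bash
# In-process class library
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0

# Isolated-process class library
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage
```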
+ # [Java](#tab/java) [!INCLUDE [functions-extension-bundles](../../includes/functions-extension-bundles.md)]
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
As with triggers, input and output bindings are added to your function as bindin
1. Make sure you've [configured the project for local development](#configure-the-project-for-local-development).
-2. Add the appropriate NuGet extension package for the specific binding.
+1. Add the appropriate NuGet extension package for the specific binding by finding the binding-specific NuGet package requirements in the reference article for the binding. For example, find package requirements for the Event Hubs trigger in the [Event Hubs binding reference article](functions-bindings-event-hubs.md).
- For more information, see [C# class library with Visual Studio](./functions-bindings-register.md#local-csharp). Find the binding-specific NuGet package requirements in the reference article for the binding. For example, find package requirements for the Event Hubs trigger in the [Event Hubs binding reference article](functions-bindings-event-hubs.md).
+1. Use the following command in the Package Manager Console to install a specific package:
+
+ # [In-process](#tab/in-process)
+
+ ```powershell
+ Install-Package Microsoft.Azure.WebJobs.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
+ ```
+
+ # [Isolated process](#tab/isolated-process)
+
+ ```powershell
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.<BINDING_TYPE> -Version <TARGET_VERSION>
+ ```
+
+
+
+ In this example replace `<BINDING_TYPE>` with the name specific to the binding extension and `<TARGET_VERSION>` with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
3. If there are app settings that the binding needs, add them to the `Values` collection in the [local setting file](functions-develop-local.md#local-settings-file).
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
When you build the project, a folder structure that looks like the following exa
| - host.json ```
-This directory is what gets deployed to your function app in Azure. The binding extensions required in [version 2.x](functions-versions.md) of the Functions runtime are [added to the project as NuGet packages](./functions-bindings-register.md#vs).
+This directory is what gets deployed to your function app in Azure. The binding extensions required in [version 2.x](functions-versions.md) of the Functions runtime are [added to the project as NuGet packages](./functions-develop-vs.md?tabs=in-process#add-bindings).
> [!IMPORTANT] > The build process creates a *function.json* file for each function. This *function.json* file is not meant to be edited directly. You can't change binding configuration or disable the function by editing this file. To learn how to disable a function, see [How to disable functions](disable-function.md).
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your
+## Manually install extensions
+
+C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, the recommended way to install extensions is either by [using extension bundles](functions-bindings-register.md#extension-bundles) or by [using Azure Functions Core Tools](functions-run-local.md#install-extensions) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file.
+
+This same process works for any other file you need to add to your app.
+
+> [!IMPORTANT]
+> When possible, you shouldn't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](functions-run-local.md#install-extensions) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods).
+
+The Functions editor built into the Azure portal lets you update your function code and configuration (function.json) files directly in the portal.
+
+1. Select your function app, then under **Functions** select **Functions**.
+1. Choose your function and select **Code + test** under **Developer**.
+1. Choose your file to edit and select **Save** when you're done.
+
+Files in the root of the app, such as function.proj or extensions.csproj need to be created and edited by using the [Advanced Tools (Kudu)](#kudu).
+
+1. Select your function app, then under **Development tools** select **Advanced tools** > **Go**.
+1. If prompted, sign in to the SCM site with your Azure credentials.
+1. From the **Debug console** menu, choose **CMD**.
+1. Navigate to `.\site\wwwroot`, select the plus (**+**) button at the top, and select **New file**.
+1. Name the file, such as `extensions.csproj` and press Enter.
+1. Select the edit button next to the new file, add or update code in the file, and select **Save**.
+1. For a project file like extensions.csproj, run the following command to rebuild the extensions project:
+
+ ```bash
+ dotnet build extensions.csproj
+ ```
+ ## Platform features Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. When working in the [Azure portal](https://portal.azure.com), the left pane is where you access the many features of the App Service platform that you can use in your function apps.
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
For information on how to upload files to your function folder, see the section
The directory that contains the function script file is automatically watched for changes to assemblies. To watch for assembly changes in other directories, add them to the `watchDirectories` list in [host.json](functions-host-json.md). ## Using NuGet packages
-To use NuGet packages in a 2.x and later C# function, upload a *function.proj* file to the function's folder in the function app's file system. Here is an example *function.proj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*:
+
+The way that both binding extension packages and other NuGet packages are added to your function app depends on the [targeted version of the Functions runtime](functions-versions.md).
+
+# [v2.x+](#tab/functionsv2)
+
+By default, the [supported set of Functions extension NuGet packages](functions-triggers-bindings.md#supported-bindings) are made available to your C# script function app by using extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles).
+
+If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [Install extensions](functions-run-local.md#install-extensions).
+
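A minimal sketch of that Core Tools call for a C# script app, run from the root of the local project:

```bash
# Install the binding extensions referenced by the app's function.json files;
# --csx targets C# script (.csx) projects
func extensions install --csx
```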
+By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. Here is an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*:
```xml <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netstandard2.0</TargetFramework> </PropertyGroup>- <ItemGroup> <PackageReference Include="Microsoft.ProjectOxford.Face" Version="1.1.0" /> </ItemGroup> </Project> ```
-To use a custom NuGet feed, specify the feed in a *Nuget.Config* file in the Function App root. For more information, see [Configuring NuGet behavior](/nuget/consume-packages/configuring-nuget-behavior).
+# [v1.x](#tab/functionsv1)
-> [!NOTE]
-> In 1.x C# functions, NuGet packages are referenced with a *project.json* file instead of a *function.proj* file.
-
-For 1.x functions, use a *project.json* file instead. Here is an example *project.json* file:
+Version 1.x of the Functions runtime uses a *project.json* file to define dependencies. Here is an example *project.json* file:
```json {
For 1.x functions, use a *project.json* file instead. Here is an example *projec
} ```
-### Using a function.proj file
+Extension bundles aren't supported by version 1.x.
++
-1. Open the function in the Azure portal. The logs tab displays the package installation output.
-2. To upload a *function.proj* file, use one of the methods described in the [How to update function app files](functions-reference.md#fileupdate) in the Azure Functions developer reference topic.
-3. After the *function.proj* file is uploaded, you see output like the following example in your function's streaming log:
+To use a custom NuGet feed, specify the feed in a *Nuget.Config* file in the function app root folder. For more information, see [Configuring NuGet behavior](/nuget/consume-packages/configuring-nuget-behavior).
-```
-2018-12-14T22:00:48.658 [Information] Restoring packages.
-2018-12-14T22:00:48.681 [Information] Starting packages restore
-2018-12-14T22:00:57.064 [Information] Restoring packages for D:\local\Temp\9e814101-fe35-42aa-ada5-f8435253eb83\function.proj...
-2016-04-04T19:02:50.511 Restoring packages for D:\home\site\wwwroot\HttpTriggerCSharp1\function.proj...
-2018-12-14T22:01:00.844 [Information] Installing Newtonsoft.Json 10.0.2.
-2018-12-14T22:01:01.041 [Information] Installing Microsoft.ProjectOxford.Common.DotNetStandard 1.0.0.
-2018-12-14T22:01:01.140 [Information] Installing Microsoft.ProjectOxford.Face.DotNetStandard 1.0.0.
-2018-12-14T22:01:09.799 [Information] Restore completed in 5.79 sec for D:\local\Temp\9e814101-fe35-42aa-ada5-f8435253eb83\function.proj.
-2018-12-14T22:01:10.905 [Information] Packages restored.
-```
+If you are working on your project only in the portal, you'll need to manually create the extensions.csproj file or a Nuget.Config file directly in the site. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions).
## Environment variables
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Developing functions on your local computer and publishing them to Azure using C
## Prerequisites
-Azure Functions Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) for authenticating with your Azure account.
-This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
+The specific prerequisites for Core Tools depend on the features you plan to use:
+
+**[Publish](#publish)**: Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) for authenticating with your Azure account. This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools.
+
+**[Install extensions](#install-extensions)**: To manually install extensions by using Core Tools, you must have the [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) installed. The .NET Core SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions.
## <a name="v2"></a>Core Tools versions
There are no additional considerations for PowerShell.
## Register extensions
-Starting with runtime version 2.x, Functions triggers and bindings are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you are using. HTTP bindings and timer triggers don't require extensions.
-
-To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) makes all extensions available to your app and removes the chance of having package compatibility issues between extensions. Extension bundles also removes the requirement of installing the .NET Core 3.1 SDK and having to deal with the extensions.csproj file.
+Starting with runtime version 2.x, [Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you are using. HTTP bindings and timer triggers don't require extensions.
-Extension bundles is the recommended approach for functions projects other than C# complied projects. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If this works for you, you can skip this entire section.
+To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) make all extensions available to your app and remove the chance of having package compatibility issues between extensions. Extension bundles also remove the requirement of installing the .NET Core 3.1 SDK and having to deal with the extensions.csproj file.
-### Use extension bundles
+Extension bundles are the recommended approach for Functions projects other than C# compiled projects, as well as for C# script. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If bundles aren't enabled, you need to update the project's host.json file.
[!INCLUDE [Register extensions](../../includes/functions-extension-bundles.md)]
- When supported by your language, extension bundles should already be enabled after you call `func init`. You should add extension bundles to the host.json before you add bindings to the function.json file. To learn more, see [Register Azure Functions binding extensions](functions-bindings-register.md#extension-bundles).
+To learn more, see [Register Azure Functions binding extensions](functions-bindings-register.md#extension-bundles).
-### Explicitly install extensions
-
-There may be cases in a non-.NET project when you can't use extension bundles, such as when you need to target a specific version of an extension not in the bundle. In these rare cases, you can use Core Tools to install locally the specific extension packages required by your project. To learn more, see [Explicitly install extensions](functions-bindings-register.md#explicitly-install-extensions).
+There may be cases in a non-.NET project when you can't use extension bundles, such as when you need to target a specific version of an extension not in the bundle. In these rare cases, you can use Core Tools to locally install the specific extension packages required by your project. To learn more, see [Install extensions](#install-extensions).
[!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME>
To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
+## Install extensions
+
+If you aren't able to use [extension bundles](functions-bindings-register.md#extension-bundles), you can use Azure Functions Core Tools locally to install the specific extension packages required by your project.
+
+> [!IMPORTANT]
+> You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the `extensionBundle` section in *host.json* before explicitly installing extensions.
+
+The following items describe some reasons you might need to install extensions manually:
+
+* You need to access a specific version of an extension not available in a bundle.
+* You need to access a custom extension not available in a bundle.
+* You need to access a specific combination of extensions not available in a single bundle.
+
+When you explicitly install extensions, a .NET project file named extensions.csproj is added to the root of your project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.
+
+There are several ways to use Core Tools to install the required extensions in your local project.
+
+### Install all extensions
+
+Use the following command to automatically add all extension packages used by the bindings in your local project:
+
+```command
+func extensions install
+```
+
+The command reads the *function.json* file to see which packages you need, installs them, and rebuilds the extensions project (extensions.csproj). It adds any new bindings at the current version but doesn't update existing bindings. Use the `--force` option to update existing bindings to the latest version when installing new ones. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
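
For example, to also bring existing binding references up to the latest versions while installing any new ones, you could run:

```command
func extensions install --force
```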
+
+If your function app uses bindings or NuGet packages that Core Tools does not recognize, you must manually install the specific extension.
+
+### Install a specific extension
+
+Use the following command to install a specific extension package at a specific version, in this case the Storage extension:
+
+```command
+func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
+```
+
+You can use this command to install any compatible NuGet package. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
+
## Monitoring functions
The recommended way to monitor the execution of your functions is by integrating with Azure Application Insights. You can also stream execution logs to your local computer. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
This article details some of the differences between these versions, how you can
## Languages
-Starting with version 2.x, the runtime uses a language extensibility model, and all functions in a function app must share the same language. You chose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
+All functions in a function app must share the same language. You choose the language of functions in your function app when you create the app. The language of your function app is maintained in the [FUNCTIONS\_WORKER\_RUNTIME](functions-app-settings.md#functions_worker_runtime) setting, and shouldn't be changed when there are existing functions.
The following table indicates which programming languages are currently supported in each runtime version.
The following table indicates which programming languages are currently supporte
## <a name="creating-1x-apps"></a>Run on a specific version
-By default, function apps created in the Azure portal and by the Azure CLI are set to version 3.x. You can modify this version as needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving between 2.x and 3.x is allowed even with apps that have existing functions. Before moving an app with existing functions from 2.x to 3.x, be aware of any [breaking changes between 2.x and 3.x](#breaking-changes-between-2x-and-3x).
+By default, function apps created in the Azure portal and by the Azure CLI are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions. When your app has existing functions, be aware of any breaking changes between versions before moving to a later runtime version. The following sections detail changes between versions:
-Before making a change to the major version of the runtime, you should first test your existing code by deploying to another function app running on the latest major version. This testing helps to make sure it runs correctly after the upgrade.
++ [Between 3.x and 4.x](#breaking-changes-between-3x-and-4x)
++ [Between 2.x and 3.x](#breaking-changes-between-2x-and-3x)
++ [Between 1.x and later versions](#migrating-from-1x-to-later-versions)
-Downgrades from v3.x to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
+Before making a change to the major version of the runtime, you should first test your existing code by deploying to another function app running on the latest major version. This testing helps to make sure it runs correctly after the upgrade. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime.
+
+Downgrades to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
### Changing version of apps in Azure
The version of the Functions runtime used by published apps in Azure is dictated
| `~1` | 1.x |

>[!IMPORTANT]
-> Don't arbitrarily change this setting, because other app setting changes and changes to your function code may be required.
+> Don't arbitrarily change this app setting, because other app setting changes and changes to your function code may be required. You should instead change this setting in the **Function runtime settings** tab of the function app **Configuration** in the Azure portal when you are ready to make a major version upgrade.
To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).

### Pinning to a specific minor version
-To resolve issues with your function app running on the latest major version, you have to pin your app to a specific minor version. This gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+To resolve issues your function app may have when running on the latest major version, you have to temporarily pin your app to a specific minor version. This gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
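
As an illustration, on Windows you typically pin by setting the `FUNCTIONS_EXTENSION_VERSION` app setting to an explicit version value instead of `~4`. A sketch using the Azure CLI, where the app name, resource group, and version are placeholders:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings FUNCTIONS_EXTENSION_VERSION=<SPECIFIC_MINOR_VERSION>
```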
Older minor versions are periodically removed from Functions. For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
Older minor versions are periodically removed from Functions. For the latest new
.NET function apps running on version 2.x (`~2`) are automatically upgraded to run on .NET Core 3.1, which is a long-term support version of .NET Core 3. Running your .NET functions on .NET Core 3.1 allows you to take advantage of the latest security updates and product enhancements.
-Any function app pinned to `~2.0` continues to run on .NET Core 2.2, which no longer receives security and other updates. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
+Any function app pinned to `~2.0` continues to run on .NET Core 2.2, which no longer receives security and other updates. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
+
+## Minimum extension versions
+
+There's technically not a correlation between binding extension versions and the Functions runtime version. However, starting with version 4.x the Functions runtime enforces a minimum version for all trigger and binding extensions.
+
+If you receive a warning about a package not meeting a minimum required version, you should update that NuGet package to the minimum version as you normally would. The minimum version requirements for extensions used in Functions v4.x can be found in [this configuration file](https://github.com/Azure/azure-functions-host/blob/v4.x/src/WebJobs.Script/extensionrequirements.json).
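
For a compiled C# project, for instance, one way to move an affected package to a supported version is from the command line. The package and version below are only illustrative:

```command
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
```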
+
+For C# script, update the extension bundle reference in the host.json as follows:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[2.*, 3.0.0)"
+ }
+}
+```
+
+There's technically not a correlation between extension bundle versions and the Functions runtime version. However, starting with version 4.x the Functions runtime enforces a minimum version for extension bundles.
+
+If you receive a warning about your extension bundle version not meeting a minimum required version, update your existing extension bundle reference in the host.json as follows:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[2.*, 3.0.0)"
+ }
+}
+```
+
+To learn more about extension bundles, see [Extension bundles](functions-bindings-register.md#extension-bundles).
## <a name="migrating-from-3x-to-4x"></a>Migrating from 3.x to 4.x
-Azure Functions version 4.x is highly backwards compatible to version 3.x. Many apps should safely upgrade to 4.x without significant code changes. Be sure to test extensively before changing the major version in production apps.
+Azure Functions version 4.x is highly backward compatible with version 3.x. Many apps should safely upgrade to 4.x without significant code changes. Be sure to fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md) or [in a staging slot](functions-deployment-slots.md) before changing the major version in production apps.
### Upgrading an existing app
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
- Azure Functions Proxies are no longer supported in 4.x. You are recommended to use [Azure API Management](../api-management/import-function-app-as-api.md).
-- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You are recommended to use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
+- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
-- Azure Functions 4.x enforces [minimum version requirements](https://github.com/Azure/Azure-Functions/issues/1987) for extensions. Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
+- Azure Functions 4.x now enforces [minimum version requirements for extensions](#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
-- Default and maximum timeouts are now enforced in 4.x Linux consumption function apps. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
+- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
-- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. See the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories) for more information on how to configure function app settings. ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
+- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. For more information about how to configure function app settings, see the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories). ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
-- Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
+- Function apps that share storage accounts now fail to start when their host IDs are the same. For more information, see [Host ID considerations](storage-considerations.md#host-id-considerations). ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))
::: zone pivot="programming-language-csharp"
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md
The Start/Stop VMs v2 (preview) feature starts or stops Azure virtual machines (
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+> [!NOTE]
+> Automatic updating functionality was introduced on April 28th, 2022. This new auto update feature helps you stay on the latest version of the solution. This feature is enabled by default when you perform a new installation.
+> If you deployed your solution before this date, you can reinstall the latest version from our [GitHub repository](https://github.com/microsoft/startstopv2-deployments).
+
+
## Overview
Start/Stop VMs v2 (preview) is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
An HTTP trigger endpoint function is created to support the schedule and sequenc
|CostAnalyticsFunction |Timer |This function calculates the cost to run the Start/Stop V2 solution on a monthly basis.|
|SavingsAnalyticsFunction |Timer |This function calculates the total savings achieved by the Start/Stop V2 solution on a monthly basis.|
|VirtualMachineSavingsFunction |Queue |This function performs the actual savings calculation on a VM achieved by the Start/Stop V2 solution.|
+|TriggerAutoUpdate |Timer |This function starts the auto update process based on the application setting "**EnableAutoUpdate=true**".|
+|UpdateStartStopV2 |Queue |This function performs the actual auto update execution, which compares your current version with the available version and decides the final action.|
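
If you want to confirm or enable auto update on an existing deployment that already includes these functions, a sketch of setting the `EnableAutoUpdate` application setting with the Azure CLI follows; the app and resource group names are placeholders:

```azurecli
az functionapp config appsettings set --name <START_STOP_FUNCTION_APP> --resource-group <RESOURCE_GROUP> --settings EnableAutoUpdate=true
```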
For example, **Scheduled** HTTP trigger function is used to handle schedule and sequence scenarios. Similarly, **AutoStop** HTTP trigger function handles the auto stop scenario.
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Title: Storage considerations for Azure Functions
description: Learn about the storage requirements of Azure Functions and about encrypting stored data.
Previously updated : 11/09/2021 Last updated : 04/21/2022
# Storage considerations for Azure Functions
The storage account connection string must be updated when you regenerate storag
It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the Azure Storage Emulator. In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.
+You may need to use separate storage accounts to [avoid host ID collisions](#avoiding-host-id-collisions).
+
### Lifecycle management policy considerations
Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). When you apply a [lifecycle management policy](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account, the policy may remove blobs needed by the Functions host. Because of this, you shouldn't apply such policies to the storage account used by Functions. If you do need to apply such a policy, remember to exclude containers used by Functions, which are usually prefixed with `azure-webjobs` or `scm`.
When all customer data must remain within a single region, the storage account a
Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
+## Host ID considerations
+
+Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By default, this ID is auto-generated from the name of the function app, truncated to the first 32 characters. This ID is then used when storing per-app correlation and tracking information in the linked storage account. When you have function apps with names longer than 32 characters and when the first 32 characters are identical, this truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function app.
+
+Starting with version 3.x of the Functions runtime, host ID collision is detected and a warning is logged. In version 4.x, an error is logged and the host is stopped, resulting in a hard failure. More details about host ID collision can be found in [this issue](https://github.com/Azure/azure-functions-host/issues/2015).
+
+### Avoiding host ID collisions
+
+You can use the following strategies to avoid host ID collisions:
+
++ Use a separate storage account for each function app involved in the collision.
++ Rename one of your function apps to a value less than 32 characters in length, which changes the computed host ID for the app and removes the collision.
++ Set an explicit host ID for one or more of the colliding apps. To learn more, see [Host ID override](#override-the-host-id).
+
+> [!IMPORTANT]
+> Changing the storage account associated with an existing function app or changing the app's host ID can impact the behavior of existing functions. For example, a Blob Storage trigger tracks whether it's processed individual blobs by writing receipts under a specific host ID path in storage. When the host ID changes or you point to a new storage account, previously processed blobs may be reprocessed.
+
+### Override the host ID
+
+You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostid` setting. For more information, see [AzureFunctionsWebHost__hostid](functions-app-settings.md#azurefunctionswebhost__hostid).
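
For example, a sketch of setting an explicit host ID with the Azure CLI, where the app name, resource group, and ID value are placeholders and the ID must be unique among apps sharing the storage account:

```azurecli
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings AzureFunctionsWebHost__hostid=<UNIQUE_HOST_ID>
```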
+
+To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+
## Create an app without Azure Files
Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system, so Azure Files can be omitted if desired. In such cases, a writeable file system is provided, but it is not guaranteed to be shared with all function app instances.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing the Azure Monitor ag
| Built-in Role | Scope(s) | Reason | |:|:|:|
-| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
+| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |

- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
+
+ Title: Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent
+description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules.
+++ Last updated : 5/3/2022++++
+# Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent
+
+## Symptom
+**Syslog data is not uploading**: When inspecting the error logs at `/var/opt/microsoft/azuremonitoragent/log/mdsd.err`, you'll see entries about *Error while inserting item to Local persistent store…No space left on device* similar to the following snippet:
+
+```
+2021-11-23T18:15:10.9712760Z: Error while inserting item to Local persistent store syslog.error: IO error: No space left on device: While appending to file: /var/opt/microsoft/azuremonitoragent/events/syslog.error/000555.log: No space left on device
+```
+
+## Cause
+Linux AMA buffers events to `/var/opt/microsoft/azuremonitoragent/events` prior to ingestion. On a default Linux AMA install, this directory will take ~650MB of disk space at idle. The size on disk will increase when under sustained logging load. It will get cleaned up about every 60 seconds and will reduce back to ~650 MB when the load returns to idle.
+
+### Confirming the issue of Full Disk
+The `df` command shows almost no space available on `/dev/sda1`, as shown below:
+
+```
+$ df -h
+Filesystem Size Used Avail Use% Mounted on
+udev 63G 0 63G 0% /dev
+tmpfs 13G 720K 13G 1% /run
+/dev/sda1 29G 29G 481M 99% /
+tmpfs 63G 0 63G 0% /dev/shm
+tmpfs 5.0M 0 5.0M 0% /run/lock
+tmpfs 63G 0 63G 0% /sys/fs/cgroup
+/dev/sda15 105M 4.4M 100M 5% /boot/efi
+/dev/sdb1 251G 61M 239G 1% /mnt
+tmpfs 13G 0 13G 0% /run/user/1000
+```
+
+The `du` command can be used to inspect the disk to determine which files are causing the disk to be full. For example:
+
+```
+/var/log$ du -h syslog*
+6.7G syslog
+18G syslog.1
+```
+
+In some cases, `du` may not report any significantly large files/directories. It may be possible that a [file marked as (deleted) is taking up the space](https://unix.stackexchange.com/questions/182077/best-way-to-free-disk-space-from-deleted-files-that-are-held-open). This issue can happen when some other process has attempted to delete a file, but there remains a process with the file still open. The `lsof` command can be used to check for such files. In the example below, we see that `/var/log/syslog` is marked as deleted, but is taking up 3.6 GB of disk space. It hasn't been deleted because a process with PID 1484 still has the file open.
+
+```
+$ sudo lsof +L1
+COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
+none 849 root txt REG 0,1 8632 0 16764 / (deleted)
+rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog (deleted)
+```
+
+### Issue: rsyslog default configuration logs all facilities to /var/log/syslog
+On some popular distros (for example Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`) which will log events from nearly all facilities to disk at `/var/log/syslog`.
+
+AMA doesn't rely on syslog events being logged to `/var/log/syslog`. Instead, it configures rsyslog to forward events over a socket directly to the azuremonitoragent service process (mdsd).
+
+#### Fix: Remove high-volume facilities from /etc/rsyslog.d/50-default.conf
+If you're sending a high log volume through rsyslog, consider modifying the default rsyslog config to avoid logging these events to this location `/var/log/syslog`. The events for this facility would still be forwarded to AMA because of the config in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
+
+1. For example, to remove local4 events from being logged at `/var/log/syslog`, change this line in `/etc/rsyslog.d/50-default.conf` from this:
+ ```
+ *.*;auth,authpriv.none -/var/log/syslog
+ ```
+
+ To this (add local4.none;):
+
+ ```
+ *.*;local4.none;auth,authpriv.none -/var/log/syslog
+ ```
+2. `sudo systemctl restart rsyslog`
+
+### Issue: AMA Event Buffer is Filling Disk
+If you observe the `/var/opt/microsoft/azuremonitoragent/events` directory growing unbounded (10 GB or higher) and not reducing in size, [file a ticket](#file-a-ticket) with **Summary** as 'AMA Event Buffer is filling disk' and **Problem type** as 'I need help configuring data collection from a VM'.
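
Before filing the ticket, you can check the current size of the buffer directory to include in the report, for example:

```
sudo du -sh /var/opt/microsoft/azuremonitoragent/events
```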
+
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
+
+ Title: Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets
+description: Guidance for troubleshooting issues on Linux virtual machines, scale sets with Azure Monitor agent and Data Collection Rules.
+++ Last updated : 5/3/2022++++
+# Troubleshooting guidance for the Azure Monitor agent on Linux virtual machines and scale sets
++
+## Basic troubleshooting steps
+Follow the steps below to troubleshoot the latest version of the Azure Monitor agent running on your Linux virtual machine:
+
+1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).**
+
+2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorLinuxAgent' should show up with Status: 'Provisioning succeeded'
+ 2. If you don't see the extension listed, check if the machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor
+ ```
+ 3. Wait for 10-15 minutes as the extension may be in a transitioning status. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again.
+ 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+
+3. **Verify that the agent is running**:
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. Check if the agent service is running
+ ```
+ systemctl status azuremonitoragent
+ ```
+ 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+
+4. **Verify that the DCR exists and is associated with the virtual machine:**
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here.
+ 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+
+5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
+ 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
++
+## Issues collecting Performance counters
+
+## Issues collecting Syslog
+Here's how AMA collects syslog events:
+
+- AMA installs an output configuration for the system syslog daemon during the installation process. The configuration file specifies the way events flow between the syslog daemon and AMA.
+- For `rsyslog` (most Linux distributions), the configuration file is `/etc/rsyslog.d/10-azuremonitoragent.conf`. For `syslog-ng`, the configuration file is `/etc/syslog-ng/conf.d/azuremonitoragent.conf`.
+- AMA listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`
+- The syslog daemon will use queues when AMA ingestion is delayed, or when AMA isn't reachable.
+- AMA ingests syslog events via the aforementioned socket and filters them based on facility / severity combination from DCR configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` / `severity` not present in the DCR will be dropped.
+- AMA attempts to parse events in accordance with **RFC3164** and **RFC5424**. Additionally, it knows how to parse the message formats listed [here](./azure-monitor-agent-overview.md#data-sources-and-destinations).
+- AMA identifies the destination endpoint for Syslog events from the DCR configuration and attempts to upload the events.
+ > [!NOTE]
+ > AMA uses local persistency by default, all events received from `rsyslog` / `syslog-ng` are queued in `/var/opt/microsoft/azuremonitoragent/events` before being uploaded.
+
+- The quality of service (QoS) file `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos` provides CSV-format, 15-minute aggregations of the processed events and contains information about the number of syslog events processed in a given timeframe. **This file is useful in tracking Syslog event ingestion drops**.
+
+ For example, the below fragment shows that in the 15 minutes preceding 2022-02-28T19:55:23.5432920Z, the agent received 77 syslog events with facility daemon and level info and sent 77 of said events to the upload task. Additionally, the agent upload task received 77 and successfully uploaded all 77 of these daemon.info messages.
+
+ ```
+ #Time: 2022-02-28T19:55:23.5432920Z
+ #Fields: Operation,Object,TotalCount,SuccessCount,Retries,AverageDuration,AverageSize,AverageDelay,TotalSize,TotalRowsRead,TotalRowsSent
+ ...
+ MaRunTaskLocal,daemon.debug,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.info,15,15,0,60000,46.2,0,693,77,77
+ MaRunTaskLocal,daemon.notice,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.warning,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.error,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.critical,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.alert,15,15,0,60000,0,0,0,0,0
+ MaRunTaskLocal,daemon.emergency,15,15,0,60000,0,0,0,0,0
+ ...
+ MaODSRequest,https://e73fd5e3-ea2b-4637-8da0-5c8144b670c8_LogManagement,15,15,0,455067,476.467,0,7147,77,77
+ ```
+
+**Troubleshooting steps**
+1. Review the [generic Linux AMA troubleshooting steps](#basic-troubleshooting-steps) first. If agent is emitting heartbeats, proceed to step 2.
+2. The parsed configuration is stored at `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Check that Syslog collection is defined and the log destinations are the same as constructed in DCR UI / DCR JSON.
+ 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow.
+ 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
+ 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
+3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible:
+ 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user).
+ 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user).
+ 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
+ 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 5. Check that the syslog daemon queue isn't overflowing and causing the upload to fail by referring to the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md)
+4. To debug syslog events ingestion further, you can append trace flag **-T 0x2002** at the end of **MDSD_OPTIONS** in the file `/etc/default/azuremonitoragent`, and restart the agent:
+ ```
+ export MDSD_OPTIONS="-A -c /etc/opt/microsoft/azuremonitoragent/mdsd.xml -d -r $MDSD_ROLE_PREFIX -S $MDSD_SPOOL_DIRECTORY/eh -L $MDSD_SPOOL_DIRECTORY/events -e $MDSD_LOG_DIR/mdsd.err -w $MDSD_LOG_DIR/mdsd.warn -o $MDSD_LOG_DIR/mdsd.info -T 0x2002"
+ ```
+5. After the issue is reproduced with the trace flag on, you'll find more debug information in `/var/opt/microsoft/azuremonitoragent/log/mdsd.info`. Inspect the file for the possible cause of syslog collection issue, such as parsing / processing / configuration / upload errors.
+ > [!WARNING]
+ > Be sure to remove the trace flag setting **-T 0x2002** after the debugging session, since it generates many trace statements that could fill up the disk more quickly or make visually parsing the log file difficult.
+6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
++
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
+
+ Title: Troubleshoot the Azure Monitor agent on Windows Arc-enabled server
+description: Guidance for troubleshooting issues on Windows Arc-enabled server with Azure Monitor agent and Data Collection Rules.
+++ Last updated : 5/3/2022++++
+# Troubleshooting guidance for the Azure Monitor agent on Windows Arc-enabled server
++
+## Basic troubleshooting steps
+Follow the steps below to troubleshoot the latest version of the Azure Monitor agent running on your Windows Arc-enabled server:
+
+1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).**
+
+2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
+ 1. Open Azure portal > select your Arc-enabled server > Open **Settings** : **Extensions** blade from left menu > 'AzureMonitorWindowsAgent' should show up with Status: 'Succeeded'
+ 2. If not, check if the machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
+ ```
+ 3. Wait for 10-15 minutes as the extension may be in a transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+ 4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
+ 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+3. **Verify that the agent is running**:
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
+ 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+4. **Verify that the DCR exists and is associated with the Arc-enabled server:**
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR.
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the Arc-enabled server listed here
+ 4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+
+5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
+ 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+## Issues collecting Performance counters
+1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md).
+2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
+ eventName="c9302257006473204344_16355538690556228697"
+ sampleRateInSeconds="15" format="Factored">
+ <Counter>\Processor(_Total)\% Processor Time</Counter>
+ <Counter>\Memory\Committed Bytes</Counter>
+ <Counter>\LogicalDisk(_Total)\Free Megabytes</Counter>
+ <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter>
+ </CounterSet>
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+++
+### Issues using 'Custom Metrics' as destination
+1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).
+2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
+
+3. Run PowerShell command:
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+
+ Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+4. Verify `C:\Resources\Directory\AMADataStore\mcs\AuthToken-MSI.json` file is present.
+5. Verify `C:\Resources\Directory\AMADataStore\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present.
+6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\Resources\Directory\AMADataStore\Tables\MaMetricsExtensionEtw.tsf`
+ 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
+ 2. Open it and look for any Level 2 errors and try to fix them.
+7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+## Issues collecting Windows event logs
+1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md).
+2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
+ query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]">
+ <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000">
+ <Value>/Event/System/Provider/@Guid</Value>
+ </Column>
+ ...
+
+ </Column>
+ </Subscription>
+ ```
+ If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
+
+ Title: Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets
+description: Guidance for troubleshooting issues on Windows virtual machines, scale sets with Azure Monitor agent and Data Collection Rules.
+++ Last updated : 5/3/2022++++
+# Troubleshooting guidance for the Azure Monitor agent on Windows virtual machines and scale sets
++
+## Basic troubleshooting steps
+Follow the steps below to troubleshoot the latest version of the Azure Monitor agent running on your Windows virtual machine:
+
+1. **Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).**
+
+2. **Verify that the extension was successfully installed and provisioned, which installs the agent binaries on your machine**:
+ 1. Open Azure portal > select your virtual machine > Open **Settings** : **Extensions + applications** blade from left menu > 'AzureMonitorWindowsAgent' should show up with Status: 'Provisioning succeeded'
+ 2. If not, check if the machine can reach Azure and find the extension to install using the command below:
+ ```azurecli
+ az vm extension image list-versions --location <machine-region> --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor
+ ```
+ 3. Wait for 10-15 minutes as the extension may be in a transitioning status. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up.
+ 4. If not, check if you see any errors in extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+3. **Verify that the agent is running**:
+ 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
+ ```Kusto
+ Heartbeat | where Category == "Azure Monitor Agent" and Computer == "<computer-name>" | take 10
+ ```
+ 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up.
+ 3. If not, check if you see any errors in core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine
+ 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+4. **Verify that the DCR exists and is associated with the virtual machine:**
+ 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace.
+ 2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist:
+ - The virtual machine may not be associated with a DCR. See step 3
+ - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
+ 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** blade from left menu > You should see the virtual machine listed here
+ 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
+ 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
+
+5. **Verify that agent was able to download the associated DCR(s) from AMCS service:**
+ 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
+ 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+
+## Issues collecting Performance counters
+1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md).
+2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+3. Open the file and check if it contains `CounterSet` nodes as shown in the example below:
+ ```xml
+ <CounterSet storeType="Local" duration="PT1M"
+ eventName="c9302257006473204344_16355538690556228697"
+ sampleRateInSeconds="15" format="Factored">
+ <Counter>\Processor(_Total)\% Processor Time</Counter>
+ <Counter>\Memory\Committed Bytes</Counter>
+ <Counter>\LogicalDisk(_Total)\Free Megabytes</Counter>
+ <Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter>
+ </CounterSet>
+ ```
+ If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+++
+### Issues using 'Custom Metrics' as destination
+1. Carefully review the [prerequisites here](./azure-monitor-agent-manage.md#prerequisites).
+2. Ensure that the associated DCR is correctly authored to collect performance counters and send them to Azure Monitor metrics. You should see this section in your DCR:
+ ```json
+ "destinations": {
+ "azureMonitorMetrics": {
+ "name":"myAmMetricsDest"
+ }
+ }
+ ```
+3. Run PowerShell command:
+ ```powershell
+ Get-WmiObject Win32_Process -Filter "name = 'MetricsExtension.Native.exe'" | select Name,ExecutablePath,CommandLine | Format-List
+ ```
+ Verify that the *CommandLine* parameter in the output contains the argument "-TokenSource MSI"
+4. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\AuthToken-MSI.json` file is present.
+5. Verify `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\CUSTOMMETRIC_<subscription>_<region>_MonitoringAccount_Configuration.json` file is present.
+6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MaMetricsExtensionEtw.tsf`
+ 1. The command will generate the file 'MaMetricsExtensionEtw.csv'
+ 2. Open it and look for any Level 2 errors and try to fix them.
+7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+## Issues collecting Windows event logs
+1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./data-collection-rule-azure-monitor-agent.md) or [sample DCR](./data-collection-rule-sample-agent.md).
+2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+3. Open the file and check if it contains `Subscription` nodes as shown in the example below:
+ ```xml
+ <Subscription eventName="c9302257006473204344_14882095577508259570"
+ query="System!*[System[(Level = 1 or Level = 2 or Level = 3)]]">
+ <Column name="ProviderGuid" type="mt:wstr" defaultAssignment="00000000-0000-0000-0000-000000000000">
+ <Value>/Event/System/Provider/@Guid</Value>
+ </Column>
+ ...
+
+ </Column>
+ </Subscription>
+ ```
+ If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+
+
+
++
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
When you move machines to the cloud, the monitoring requirements for their softw
Azure Monitor also doesn't measure the health of different applications and services running on a virtual machine. Metric alerts can automatically resolve when a value drops below a threshold, but Azure Monitor doesn't currently have the ability to define health criteria for applications and services running on the machine, nor does it provide health rollup to group the health of related components.
-> [!NOTE]
-> A new [guest health feature for VM insights](vm/vminsights-health-overview.md) is now in public preview and does alert based on the health state of a set of performance metrics. This is initially limited though to a specific set of performance counters related to the guest operating system and not applications or other workloads running in the virtual machine.
->
-> [![VM insights guest health](media/azure-monitor-operations-manager/vm-insights-guest-health.png)](media/azure-monitor-operations-manager/vm-insights-guest-health.png#lightbox)
- Monitoring the software on your machines in a hybrid environment will typically use a combination of VM insights and Operations Manager, depending on the requirements of each machine and on your maturity developing operational processes around Azure Monitor. The Microsoft Management Agent (referred to as the Log Analytics agent in Azure Monitor) is used by both platforms so that a single machine can be simultaneously monitored by both. > [!NOTE]
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters
| `dc.services.visualstudio.com` | 443 |
+- If you are using an Arc enabled cluster on AKS, and previously installed [monitoring for AKS](./container-insights-enable-existing-clusters.md), please ensure that you have [disabled monitoring](./container-insights-optout.md) before proceeding to avoid issues during the extension install. A CLI sketch follows this list.
+ - If you had previously deployed Azure Monitor Container Insights on this cluster using script without cluster extensions, follow the instructions listed [here](container-insights-optout-hybrid.md) to delete this Helm chart. You can then continue to creating a cluster extension instance for Azure Monitor Container Insights.
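+
+As a rough sketch only, the following Azure CLI commands show one way to disable the existing AKS monitoring add-on and then install the Container Insights cluster extension. The cluster, resource group, and workspace names are hypothetical placeholders; adjust them for your environment.
+
+```azurecli
+# Sketch only: cluster, resource group, and workspace names are placeholders.
+# 1. Disable the existing AKS monitoring add-on.
+az aks disable-addons --addons monitoring --name MyAksCluster --resource-group MyResourceGroup
+
+# 2. Install the Container Insights extension on the Arc-enabled cluster.
+az k8s-extension create --name azuremonitor-containers \
+  --cluster-name MyArcCluster --resource-group MyResourceGroup \
+  --cluster-type connectedClusters \
+  --extension-type Microsoft.AzureMonitor.Containers \
+  --configuration-settings logAnalyticsWorkspaceResourceID=/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace
+```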
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Key Vault |[Azure Key Vault logging](../../key-vault/general/logging.md) | | Azure Kubernetes Service |[Azure Kubernetes Service logging](../../aks/monitor-aks-reference.md#resource-logs) | | Azure Load Balancer |[Log Analytics for Azure Load Balancer](../../load-balancer/monitor-load-balancer.md) |
+| Azure Load Testing |[Azure Load Testing logs](../../load-testing/monitor-load-testing-reference.md#resource-logs) |
| Azure Logic Apps |[Logic Apps B2B custom tracking schema](../../logic-apps/logic-apps-track-integration-account-custom-tracking-schema.md) | | Azure Machine Learning | [Diagnostic logging in Azure Machine Learning](../../machine-learning/monitor-resource-reference.md) | | Azure Media Services | [Media Services monitoring schemas](/azure/media-services/latest/monitoring/monitor-media-services#schemas) |
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
description: Overview of how Azure Monitor is billed and how to estimate and ana
Previously updated : 03/28/2022 Last updated : 05/05/2022 # Azure Monitor cost and usage This **article** describes the different ways that Azure Monitor charges for usage, how to evaluate charges on your Azure bill, and how to estimate charges to monitor your entire environment.
Following is basic guidance that you can use for common resources.
The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
+>[!NOTE]
+>The billable data volume is calculated using a customer-friendly, cost-effective method. The billed data volume is defined as the size of the data that will be stored, excluding a set of standard columns and any JSON wrapper that was part of the data received for ingestion. This billable data volume is substantially smaller than the size of the entire JSON-packaged event, often less than 50%. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models. [Learn more](logs/cost-logs.md#data-size-calculation).
+ ## Estimate application usage There are two methods that you can use to estimate the amount of data from an application monitored with Application Insights.
azure-monitor Vminsights Health Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-alerts.md
description: Describes the alerts created by VM insights guest health including
Previously updated : 11/10/2020 Last updated : 05/03/2022 # VM insights guest health alerts (preview) VM insights guest health allows you to view the health of a virtual machine as defined by a set of performance measurements that are sampled at regular intervals. An alert can be created when a virtual machine or monitor changes to an unhealthy state. You can view and manage these alerts with [those created by alert rules in Azure Monitor](../alerts/alerts-overview.md) and choose to be proactively notified when a new alert is created. + ## Configure alerts You cannot create an explicit alert rule for VM insights guest health while this feature is in preview. By default, alerts will be created for each virtual machine but not for each monitor. This means that if a monitor changes to a state that doesn't affect the current state of the virtual machine, then no alert is created because the virtual machine state didn't change.
azure-monitor Vminsights Health Configure Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-configure-dcr.md
description: Describes how to modify default monitoring in VM insights guest hea
Previously updated : 10/15/2020 Last updated : 05/03/2022 # Configure monitoring in VM insights guest health using data collection rules (preview) [VM insights guest health](vminsights-health-overview.md) allows you to view the health of a virtual machine as defined by a set of performance measurements that are sampled at regular intervals. This article describes how you can modify default monitoring across multiple virtual machines using data collection rules. + ## Monitors The health state of a virtual machine is determined by the [rollup of health](vminsights-health-overview.md#health-rollup-policy) from each of its monitors. There are two types of monitors in VM insights guest health as shown in the following table.
azure-monitor Vminsights Health Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-configure.md
description: Describes how to modify default monitoring for VM insights guest he
Previously updated : 12/14/2020 Last updated : 05/03/2022 # Configure monitoring in VM insights guest health (preview) VM insights guest health allows you to view the health of a virtual machine as defined by a set of performance measurements that are sampled at regular intervals. This article describes how you can modify default monitoring using the Azure portal. It also describes fundamental concepts of monitors required for [configuring monitoring using a data collection rule](vminsights-health-configure-dcr.md). ++ ## Open monitor configuration Open monitor configuration in the Azure portal by selecting the monitor and then the **Configuration** tab.
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-enable.md
description: Describes how to enable VM insights guest health in your subscripti
Previously updated : 04/05/2021 Last updated : 05/03/2022
# Enable VM insights guest health (preview) VM insights guest health allows you to view the health of a virtual machine as defined by a set of performance measurements that are sampled at regular intervals. This article describes how to enable this feature in your subscription and how to enable guest monitoring for each virtual machine. ++ ## Current limitations VM insights guest health has the following limitations in public preview:
azure-monitor Vminsights Health Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-migrate.md
+
+ Title: Migrate from VM insights guest health (preview) to Azure Monitor log alerts
+description: Describes how to migrate from VM insights guest health to Azure Monitor log alerts.
+ Last updated : 03/01/2022+++
+# Migrate from VM insights guest health to Azure Monitor log alerts
+This article walks through migrating from VM insights guest health (preview) to Azure Monitor log alerts to configure alerts on key VM metrics, and offboarding VMs from VM insights guest health (preview). [VM insights guest health (preview)](vminsights-health-overview.md) will retire on 30 November 2022. If you are using this feature to configure alerts on VM metrics (CPU utilization, Available memory, Free disk space), make sure to transition to Azure Monitor log alerts before this date.
+
+## Configure Azure Monitor log alerts
+Before you remove VM insights guest health, you should create alert rules to replace its alerting functionality. See [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md#log-alerts) for instructions on creating Azure Monitor log alerts.
+
+> [!IMPORTANT]
+> Transitioning to log alerts will result in charges according to Azure Monitor log alert rates. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for details.
++
+Alert rules for the key metrics used by VM health include the following:
+
+- [CPU utilization](monitor-virtual-machine-alerts.md#log-alert-rules)
+- [Available memory](monitor-virtual-machine-alerts.md#log-alert-rules-1)
+- [Free disk space](monitor-virtual-machine-alerts.md#log-query-alert-rules-1)
+
+To create an alert rule for a single VM that alerts on any of the three conditions, create a [log alert rule](monitor-virtual-machine-alerts.md#log-alerts) with the following details.
++
+| Setting | Value |
+|:|:|
+| **Scope** | |
+| Target scope | Select your virtual machine. |
+| **Condition** | |
+| Signal type | Log |
+| Signal name | Custom log search |
+| Query | \<Use the query below\> |
+| Measurement | Measure: *Table rows*<br>Aggregation type: Count<br>Aggregation granularity: 15 minutes |
+| Alert Logic | Operator: Greater than<br>Threshold value: 0<br>Frequency of evaluation: 15 minutes |
+| Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. |
+| **Details** | |
+| Severity| Warning |
+| Alert rule name | Daily data limit reached |
+
+Replace the values in the following query if you want to set different thresholds.
+
+```kusto
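+// Flags a VM when, over the last 15 minutes, average CPU utilization is at least 90%, available memory is at most 100 MB, or free disk space is at most 10%.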
+InsightsMetrics
+| where TimeGenerated >= ago(15m)
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize UtilizationPercentage = avg(Val) by Computer, _ResourceId
+| where UtilizationPercentage >= 90
+| union (
+InsightsMetrics
+| where TimeGenerated >= ago(15m)
+| where Origin == "vm.azm.ms"
+| where Namespace == "Memory" and Name == "AvailableMB"
+| summarize AvailableMB = avg(Val) by Computer, _ResourceId
+| where AvailableMB <= 100)
+| union (
+InsightsMetrics
+| where Origin == "vm.azm.ms"
+| where TimeGenerated >= ago(15m)
+| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
+| summarize FreeSpacePercentage = avg(Val) by Computer, _ResourceId
+| where FreeSpacePercentage <= 10)
+| summarize UtilizationPercentage = max(UtilizationPercentage), AvailableMB = max(AvailableMB), FreeSpacePercentage = max(FreeSpacePercentage) by Computer, _ResourceId
+```
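+
+If you prefer to create this rule from the command line instead of the portal, the following Azure CLI sketch uses `az monitor scheduled-query create` with a simplified, CPU-only version of the query above (substitute the full query to cover all three conditions). The rule name, resource group, and VM resource ID are placeholders; verify the parameters against the current CLI reference and attach an action group as described in the table above.
+
+```azurecli
+# Sketch only: the names and IDs below are placeholders. Requires the scheduled-query CLI extension.
+QUERY='InsightsMetrics
+| where TimeGenerated >= ago(15m)
+| where Origin == "vm.azm.ms"
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize UtilizationPercentage = avg(Val) by Computer, _ResourceId
+| where UtilizationPercentage >= 90'
+
+az monitor scheduled-query create \
+  --name "vm-guest-health-replacement" \
+  --resource-group MyResourceGroup \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVM" \
+  --condition "count 'unhealthy' > 0" \
+  --condition-query unhealthy="$QUERY" \
+  --evaluation-frequency 15m --window-size 15m --severity 2 \
+  --description "Average CPU utilization at or above 90 percent"
+```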
+
+## Offboard VMs from VM insights guest health
+Use the following steps to offboard the VMs from the VM insights guest health (preview) feature. The **Health** tab and the **Guest VM Health** status in VM insights will not be available after retirement.
++
+### 1. Uninstall the VM extension for VM insights guest health
+The VM Extension is called *GuestHealthWindowsAgent* for Windows VMs and *GuestHealthLinuxAgent* for Linux VMs. You can remove the extension from the **Extensions + applications** page for the virtual machine in the Azure portal, [Azure PowerShell](/powershell/module/az.compute/remove-azvmextension), or [Azure CLI](/cli/azure/vm/extension#az-vm-extension-delete).
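+
+For example, a minimal Azure CLI sketch for a Linux VM (the resource group and VM names are placeholders):
+
+```azurecli
+# Remove the guest health extension from a Linux VM; use GuestHealthWindowsAgent for a Windows VM.
+az vm extension delete --resource-group MyResourceGroup --vm-name MyVM --name GuestHealthLinuxAgent
+```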
++
+### 2. Delete the Data Collection Rule Association created for VM insights guest health
+Before you can remove the data collection rule for VM insights guest health, you need to remove its association with any VMs. If the VM was onboarded to VM insights guest health using the Azure portal, a default DCR with a name similar to *Microsoft-VMInsights-Health-xxxxx* will have been created. If you onboarded with another method, you may have given the DCR a different name.
+
+From the **Monitor** menu in the Azure portal, select **Data Collection Rules**. Click on the DCR for VM insights guest health, and then select **Resources**. Select the VMs to remove and click **Delete**.
+
+You can also remove the Data Collection Rule Association using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-delete).
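+
+For example, a minimal Azure CLI sketch (the association name and VM resource ID are placeholders):
+
+```azurecli
+# Delete the association between the VM and the guest health data collection rule.
+az monitor data-collection rule association delete \
+  --name "MyHealthDcrAssociation" \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVM"
+```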
++
+### 3. Delete Data Collection Rule created for VM insights guest health
+To remove the data collection rule, click **Delete** from the DCR page in the Azure portal. You can also delete the Data Collection Rule using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-delete).
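+
+For example, a minimal Azure CLI sketch (the rule name and resource group are placeholders):
+
+```azurecli
+# Delete the guest health data collection rule itself.
+az monitor data-collection rule delete --name "Microsoft-VMInsights-Health-xxxxx" --resource-group MyResourceGroup
+```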
++
+## Next steps
+
+- [Read more about log query alerts for virtual machine](monitor-virtual-machine-alerts.md)
azure-monitor Vminsights Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-overview.md
description: Overview of the health feature in VM insights including how you can
Previously updated : 10/27/2020 Last updated : 05/03/2022 # VM insights guest health (preview) VM insights guest health allows you to view the health of virtual machines based on a set of performance measurements that are sampled at regular intervals from the guest operating system. You can quickly check the health of all virtual machines in a subscription or resource group, drill down on the detailed health of a particular virtual machine, or be proactively notified when a virtual machine becomes unhealthy. ++ ## Enable virtual machine health See [Enable VM insights guest health (preview)](vminsights-health-enable.md) for details on enabling the guest health feature and onboarding virtual machines.
azure-monitor Vminsights Health Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-troubleshoot.md
description: Describes troubleshooting steps that you can take when you have iss
Previously updated : 02/25/2021 Last updated : 05/03/2022 # Troubleshoot VM insights guest health (preview) This article describes troubleshooting steps that you can take when you have issues with VM insights health. + ## Installation errors
-If any of the following solutions do not solve your installation issue, collect VM Health agent log located at `/var/log/azure/Microsoft.Azure.Monitor.VirtualMachines.GuestHealthLinuxAgent/*.log` and contact Microsoft for further investigation.
+If any of the following solutions don't solve your installation issue, collect VM Health agent log located at `/var/log/azure/Microsoft.Azure.Monitor.VirtualMachines.GuestHealthLinuxAgent/*.log` and contact Microsoft for further investigation.
### Error message showing db5 error Your installation didn't succeed and your installation error message is similar to the following:
Your installation didn't succeed and your installation error message is similar
Exiting with the following error: "Failed to install VM Guest Health Agent: Init already exists: /etc/systemd/system/vmGuestHealthAgent.service"install vmGuestHealthAgent service execution failed with exit code 37 ```
-VM Health Agent will uninstall the existing service first before installing the current version. The reason for this error is likely because the previous service file didn't get cleaned up due to some reason. Login to the VM and run the following command backup existing service file and try re-install again.
+VM Health Agent will uninstall the existing service first before installing the current version. This error likely occurs because the previous service file didn't get cleaned up. Log in to the VM, run the following command to back up the existing service file, and then try reinstalling.
``` sudo mv /etc/systemd/system/vmGuestHealthAgent.service /etc/systemd/system/vmGuestHealthAgent.service.bak
Your installation didn't succeed and your installation error message is similar
``` Exiting with the following error: "Failed to install VM Guest Health Agent: exit status 1"install vmGuestHealthAgent service execution failed with exit code 37 ```
-This is likely because VM Guest Agent couldn't acquire the lock for the service file. Try to reboot your VM which will release the lock.
+This is likely because VM Guest Agent couldn't acquire the lock for the service file. Try to reboot your VM, which will release the lock.
## Upgrade errors ### Upgrade available message is still displayed after upgrading guest health -- Verify that VM is running in global Azure. Azure Arc-enabled servers are not yet supported.
+- Verify that VM is running in global Azure. Azure Arc-enabled servers aren't yet supported.
- Verify that the virtual machine's region and operating system version are supported as described in [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md). - Verify that guest health extension installed successfully with 0 exit code. - Verify that Azure Monitor agent extension is installed successfully. - Verify that system-assigned managed identity is enabled for the virtual machine. - Verify that no user-assigned managed identities are specified for the virtual machine.-- Verify for Windows virtual machines that locale is *US English*. Localization is not currently supported by Azure Monitor agent.-- Verify that the virtual machine is not using the network proxy. Azure Monitor agent does not currently support proxies.
+- Verify for Windows virtual machines that locale is *US English*. Localization isn't currently supported by Azure Monitor agent.
+- Verify that the virtual machine isn't using the network proxy. Azure Monitor agent doesn't currently support proxies.
- Verify that the health extension agent started without errors. If the agent can't start, the agent's state may be corrupt. Delete the contents of the agent state folder and restart the agent. - For Linux: Daemon is *vmGuestHealthAgent*. State folder is */var/opt/vmGuestHealthAgent/** - For Windows: Service is *VM Guest Health agent*. State folder is _%ProgramData%\Microsoft\VMGuestHealthAgent\\*_. - Verify the Azure Monitor agent has network connectivity. - From the virtual machine, attempt to ping _\<region\>.handler.control.monitor.azure.com_. For example, for a virtual machine in westeurope, attempt to ping _westeurope.handler.control.monitor.azure.com:443_. - Verify that virtual machine has an association with a data collection rule in the same region as the Log Analytics workspace.
- - Refer to **Create data collection rule (DCR)** in [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md) to ensure structure of the DCR is correct. Pay particular attention to presence of *performanceCounters* data source section set up to samples three counters and presence of *inputDataSources* section in health extension configuration to send counters to the extension.
+  - Refer to **Create data collection rule (DCR)** in [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md) to ensure the structure of the DCR is correct. Pay particular attention to the presence of a *performanceCounters* data source section set up to sample three counters, and the presence of an *inputDataSources* section in the health extension configuration to send counters to the extension.
- Check the virtual machine for guest health extension errors. - For Linux: Check logs at _/var/log/azure/Microsoft.Azure.Monitor.VirtualMachines.GuestHealthLinuxAgent/*.log_. - For Windows: Check logs at _C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.VirtualMachines.GuestHealthWindowsAgent\{extension version}\*.log_.
This is likely because VM Guest Agent couldn't acquire the lock for the service
#### Verify that the virtual machine meets configuration requirements
-1. Verify that the virtual machine is an Azure virtual machine. Azure Arc for servers is not currently supported.
+1. Verify that the virtual machine is an Azure virtual machine. Azure Arc for servers isn't currently supported.
2. Verify that the virtual machine is running a [supported operating system](vminsights-health-enable.md?current-limitations.md). 3. Verify that the virtual machine is installed in a [supported region](vminsights-health-enable.md?current-limitations.md). 4. Verify that the Log Analytics workspace is installed in a [supported region](vminsights-health-enable.md?current-limitations.md).
This error indicates that the **Microsoft.WorkloadMonitor** resource provider wa
### Health shows as "unknown" after guest health is enabled. #### Verify that performance counters on Windows nodes are working correctly
-Guest health relies on the agent being able to collect performance counters from the node. he base set of performance counter libraries may become corrupted and may need to be rebuilt. Follow the instructions at [Manually rebuild performance counter library values](/troubleshoot/windows-server/performance/rebuild-performance-counter-library-values) to rebuild the performance counters.
+Guest health relies on the agent being able to collect performance counters from the node. The base set of performance counter libraries may become corrupted and may need to be rebuilt. Follow the instructions at [Manually rebuild performance counter library values](/troubleshoot/windows-server/performance/rebuild-performance-counter-library-values) to rebuild the performance counters.
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-mount-options.md
na Previously updated : 06/01/2021 Last updated : 05/05/2022 # Linux NFS mount options best practices for Azure NetApp Files
For details, see [Linux concurrency best practices for Azure NetApp Files](perfo
## `Rsize` and `Wsize`
+Examples in this section provide information about how to approach performance tuning. You might need to make adjustments to suit your specific application needs.
The `rsize` and `wsize` flags set the maximum transfer size of an NFS operation. If `rsize` or `wsize` are not specified on mount, the client and server negotiate the largest size supported by the two. Currently, both Azure NetApp Files and modern Linux distributions support read and write sizes as large as 1,048,576 Bytes (1 MiB). However, for best overall throughput and latency, Azure NetApp Files recommends setting both `rsize` and `wsize` no larger than 262,144 Bytes (256 K). You might observe both increased latency and decreased throughput when using `rsize` and `wsize` larger than 256 KiB. For example, [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](../virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md#mount-the-azure-netapp-files-volumes) shows the 256-KiB `rsize` and `wsize` maximum as follows:
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> [!NOTE] > A web site must have a globally unique URL. When you create a web site that uses a hosting plan, the URL is `http://<app-name>.azurewebsites.net`. The app name must be globally unique. When you create a web site that uses an App Service Environment, the app name must be unique within the [domain for the App Service Environment](../../app-service/environment/using-an-ase.md#app-access). For both cases, the URL of the site is globally unique. >
-> Azure Functions has the same naming rules and restrictions as Microsoft.Web/sites. However, prior to version 4.x of Azure Functions Core Tools, the function name was truncated to 32 characters when generating the host ID. For version 4.x, this limit no longer applies. For earlier versions, limit the function name to 32 characters to avoid naming collisions.
+> Azure Functions has the same naming rules and restrictions as Microsoft.Web/sites. When generating the host ID, the function app name is truncated to 32 characters. This can cause host ID collision when a shared storage account is used. For more information, see [Host ID considerations](../../azure-functions/storage-considerations.md#host-id-considerations).
## Next steps
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
+
+ Title: Create an Azure Video Indexer account
+description: This topic explains how to create an account for Azure Video Indexer.
++ Last updated : 05/03/2022++
+# Get started with Azure Video Indexer in Azure portal
+
+This Quickstart walks you through the steps to get started with Azure Video Indexer. You will create an Azure Video Indexer account and its accompanying resources by using the Azure portal.
+
+To start using Azure Video Indexer, you will need to create an Azure Video Indexer account. The account needs to be associated with a [Media Services][docs-ms] resource and a [User-assigned managed identity][docs-uami]. The managed identity will need to have the Contributor role on the Media Services resource.
+
+## Prerequisites
+
+### Azure level
+
+* The user creating the account should be a member of your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles: once with Contributor and once with User Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
+* Register the **EventGrid** resource provider using the Azure portal.
+
+ In the [Azure portal](https://ms.portal.azure.com), go to **Subscriptions**->[<*subscription*>]->**ResourceProviders**.
+Search for **Microsoft.Media** and **Microsoft.EventGrid**. If they aren't in the "Registered" state, click **Register**. It takes a couple of minutes to register. You can also register the providers with the Azure CLI, as shown in the sketch below.
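+
+The following sketch registers both providers from the Azure CLI; it operates at the subscription level, so no resource names are needed.
+
+```azurecli
+# Register the resource providers required by Azure Video Indexer.
+az provider register --namespace Microsoft.Media
+az provider register --namespace Microsoft.EventGrid
+
+# Optionally confirm the registration state.
+az provider show --namespace Microsoft.EventGrid --query registrationState --output tsv
+```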
+
+### Azure Video Indexer
+
+* Owner<sup>*</sup> role assignment on the Subscription level.
+
+ * Owner<sup>*</sup> role assignment on the related Azure Media Services (AMS)
+ * Owner<sup>*</sup> role assignment on the related Managed Identity
+
+<sup>*</sup> Or both **Contributor** and **User Access Administrator** roles
+
+## Azure portal
+
+### Create an Azure Video Indexer account in the Azure portal
+
+1. Sign into the [Azure portal](https://portal.azure.com/).
+1. Using the search bar at the top, enter **"Azure Video Indexer"**.
+1. Click on *Azure Video Indexer* under *Services*.
+
+ ![Image of search bar](media/create-account-portal/search-bar.png)
+
+1. Click **Create**.
+1. In the **Create an Azure Video Indexer resource** section enter required values. Here are the definitions:
+
+ <!--![Image of create account](media/create-account-portal/create-account-blade.png)-->
+ | Name | Description|
+ |||
+ |**Subscription**|Choose the subscription that you are using to create the Azure Video Indexer account.|
+ |**Resource Group**|Choose a resource group where you are creating the Azure Video Indexer account, or select **Create new** to create a resource group.|
+ |**Azure Video Indexer account**|Select *Create a new account* option.|
+ |**Resource name**|Enter the name of the new Azure Video Indexer account, the name can contain letters, numbers and dashes with no spaces.|
+    |**Region**|Select the geographic region that will be used to deploy the Azure Video Indexer account. The location matches the **resource group location** you chose. If you'd like to change the selected location, change the selected resource group or create a new one in the preferred location. [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
+    |**Media Services account name**|Select a Media Services account that the new Azure Video Indexer account will use to process the videos. You can select an existing Media Services account or create a new one. The Media Services account must be in the same location you selected.|
+    |**User-assigned managed identity**|Select a user-assigned managed identity that the new Azure Video Indexer account will use to access the Media Services account. You can select an existing user-assigned managed identity or create a new one. The user-assigned managed identity will be assigned the Contributor role on the Media Services account.|
+1. Click **Review + create** at the bottom of the form.
+
+### Review deployed resource
+
+You can use the Azure portal to validate the Azure Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure Video Indexer account.
+
+### Overview
+
+![Image of overview](media/create-account-portal/overview.png)
+
+Click on *Explore Azure Video Indexer's portal* to view your new account on the [Azure Video Indexer portal](https://aka.ms/vi-portal-link).
+
+### Management API
+
+![Image of Generate-access-token](media/create-account-portal/generate-access-token.png)
+
+Use the *Management API* tab to manually generate access tokens for the account.
+This token can be used to authenticate API calls for this account. Each token is valid for one hour.
+
+Choose the following:
+* Permission type: **Contributor** or **Reader**
+* Scope: **Account**, **Project** or **Video**
+ * For **Project** or **Video** you should also insert the matching ID
+* Click **Generate**
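+
+As an alternative to the portal, you can call the same Azure Resource Manager action from the Azure CLI. This is a sketch only: the subscription ID, resource group, account name, and API version shown are assumptions that you should verify against the Azure Video Indexer ARM API reference.
+
+```azurecli
+# Sketch: generate a one-hour access token through the ARM generateAccessToken action.
+# The api-version and property names below are assumptions; confirm them in the Video Indexer ARM API reference.
+az rest --method post \
+  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.VideoIndexer/accounts/MyVideoIndexerAccount/generateAccessToken?api-version=2022-04-13-preview" \
+  --body '{"permissionType": "Contributor", "scope": "Account"}'
+```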
+++
+### Next steps
+
+Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ApiUsage/ArmBased).
++
+<!-- links -->
+[docs-uami]: ../active-directory/managed-identities-azure-resources/overview.md
+[docs-ms]: /azure/media-services/latest/media-services-overview
+[docs-role-contributor]: ../../role-based-access-control/built-in-roles.md#contibutor
+[docs-contributor-on-ms]: ./add-contributor-role-on-the-media-service.md
azure-video-indexer Create Video Analyzer For Media Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-video-analyzer-for-media-account.md
- Title: Create an Azure Video Indexer account
-description: This topic explains how to create an account for Azure Video Indexer.
-- Previously updated : 10/13/2021---
-# Get started with Azure Video Indexer in Azure portal
-
-This Quickstart walks you through the steps to get started with Azure Video Indexer. You will create an Azure Video Indexer account and its accompanying resources by using the Azure portal.
-
-To start using Azure Video Indexer, you will need to create an Azure Video Indexer account. The account needs to be associated with a [Media Services][docs-ms] resource and a [User-assigned managed identity][docs-uami]. The managed identity will need to have Contributor permissions role on the Media Services.
-
-## Prerequisites
-> [!NOTE]
-> You'll need an Azure subscription where you have access to both the Contributor role and the User Access Administrator role to the resource group under which you will create new resources, and Contributor role on both Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure Video Indexer account.
--
-## Azure portal
-
-### Create an Azure Video Indexer account in the Azure portal
-
-1. Sign into the [Azure portal](https://portal.azure.com/).
-1. Using the search bar at the top, enter **"Azure Video Indexer"**.
-1. Click on *Azure Video Indexer* under *Services*.
-
- ![Image of search bar](media/create-video-analyzer-for-media-account/search-bar1.png)
-
-1. Click **Create**.
-1. In the **Create an Azure Video Indexer resource** section enter required values.
-
- ![Image of create account](media/create-video-analyzer-for-media-account/create-account-blade.png)
--
-| Name | Description |
-| ||
-|**Subscription**|Choose the subscription that you are using to create the Azure Video Indexer account.|
-|**Resource Group**|Choose a resource group where you are creating the Azure Video Indexer account, or select **Create new** to create a resource group.|
-|**Azure Video Indexer account**|Select *Create a new account* option.|
-|**Resource name**|Enter the name of the new Azure Video Indexer account, the name can contain letters, numbers and dashes with no spaces.|
-|**Location**|Select the geographic region that will be used to deploy the Azure Video Indexer account. The location matches the **resource group location** you chose, if you'd like to change the selected location change the selected resource group or create a new one in the preferred location. [Azure region in which Azure Video Indexer is available](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all)|
-|**Media Services account name**|Select a Media Services that the new Azure Video Indexer account will use to process the videos. You can select an existing Media Services or you can create a new one. The Media Services must be in the same location you selected.|
-|**User-assigned managed identity**|Select a user-assigned managed identity that the new Azure Video Indexer account will use to access the Media Services. You can select an existing user-assigned managed identity or you can create a new one. The user-assignment managed identity will be assigned the role of Contributor role on the Media Services.|
-
-1. Click **Review + create** at the bottom of the form.
-
-### Review deployed resource
-
-You can use the Azure portal to validate the Azure Video Indexer account and other resources that were created. After the deployment is finished, select **Go to resource** to see your new Azure Video Indexer account.
-
-### Overview
-
-![Image of overview](media/create-video-analyzer-for-media-account/overview-screenshot.png)
-
-Click on *Explore Azure Video Indexer's portal* to view your new account on the [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
-
-### Management API
-
-![Image of Generate-access-token](media/create-video-analyzer-for-media-account/generate-access-token.png)
-
-Use the *Management API* tab to manually generate access tokens for the account.
-This token can be used to authenticate API calls for this account. Each token is valid for one hour.
-
-Choose the following:
-* Permission type: **Contributor** or **Reader**
-* Scope: **Account**, **Project** or **Video**
- * For **Project** or **Video** you should also insert the matching ID
-* Click **Generate**
---
-### Next steps
-
-Learn how to [Upload a video using C#](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ApiUsage/ArmBased).
--
-<!-- links -->
-[docs-uami]: ../active-directory/managed-identities-azure-resources/overview.md
-[docs-ms]: /azure/media-services/latest/media-services-overview
-[docs-role-contributor]: ../../role-based-access-control/built-in-roles.md#contibutor
-[docs-contributor-on-ms]: ./add-contributor-role-on-the-media-service.md
azure-vmware Configure Hcx Network Extension High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-hcx-network-extension-high-availability.md
+
+ Title: Configure HCX network extension high availability
+description: Learn how to configure HCX network extension high availability
+ Last updated : 05/06/2022++
+# HCX Network extension high availability (HA)
+
+VMware HCX is an application mobility platform that's designed to simplify application migration, workload rebalancing, and business continuity across data centers and clouds.
+
+The HCX Network Extension service provides layer 2 connectivity between sites. Network Extension HA protects extended networks from a Network Extension appliance failure at either the source or remote site.
+
+HCX 4.3.0 or later allows network extension high availability. Network Extension HA operates in Active/Standby mode. In this article, you'll learn how to configure HCX network extension High Availability on Azure private cloud.
+
+## Prerequisites
+
+The Network Extension High Availability (HA) setup requires four Network Extension appliances, with two appliances at the source site and two appliances at the remote site. Together, these two pairs form the HA Group, which is the mechanism for managing Network Extension High Availability. Appliances on the same site require a similar configuration and must have access to the same set of resources.
+
+- Network Extension HA requires an HCX Enterprise license.
+- In the HCX Compute Profile, the Network Extension Appliance Limit is set to allow for the number of Network Extension appliances. The Azure VMware Solutions Limit is automatically set to unlimited.
+- In the HCX Service Mesh, the Network Extension Appliance Scale Out Appliance Count is set to provide enough appliances to support network extension objectives, including any Network Extension HA groups.
+
+When you create a service mesh, set the appliance count to a minimum of two. For an existing service mesh, you can edit and adjust the appliance count to provide the required appliance count.
+
+- The Network Extension appliances selected for HA activation must have no networks extended over them.
+- Only Network Extension appliances upgraded to HCX 4.3.0 or later can be added to HA Groups.
+- Learn more about the [Network Extension High Availability](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-E1353511-697A-44B0-82A0-852DB55F97D7.html?msclkid=1fcacda4c4dd11ecae41f8715a8d8ded) feature, prerequisites, considerations and limitations.
+
+## Activate high availability (HA)
+
+Use the following steps to activate HA, create HA groups, and view the HA roles and options available.
+
+1. Sign in to HCX Manager UI in one of two ways:
+ 1. cloudadmin@vsphere.local.
+ 1. HCX UI through vCenter HCX Plugin.
+1. Navigate to **Infrastructure**, then **Interconnect**.
+1. Select **Service Mesh**, then select **View Appliances**.
+1. Select **Appliances** from the **Interconnect** tab options.
+ 1. Check the network appliance that you want to make highly available and select **Activate High Availability**.
+
+    :::image type="content" source="media/hcx/appliances-activate-high-availability.png" alt-text="Screenshot of the appliances tab with a list of appliances you can choose from to activate high availability." lightbox="media/hcx/appliances-activate-high-availability.png":::
+
+1. Confirm by selecting **Activate HA**.
+ 1. Activating HA initiates the process to create an HA group. The process automatically selects an HA partner from the available NE Appliances.
+1. After the HA group is created, the **HA Roles** for the local and remote appliances display **Active** and **Standby**.
+
+    :::image type="content" source="media/hcx/high-availability-group-active-standby-roles.png" alt-text="Screenshot of the active and standby roles that are assigned to the local and remote appliances." lightbox="media/hcx/high-availability-group-active-standby-roles.png":::
+
+1. Select **HA Management** from the **Interconnect** tab options to view the HA group details and the available options: **Manual failover, Deactivate, Redeploy, and Force Sync**.
+
+    :::image type="content" source="media/hcx/high-availability-management-group-details-available-options.png" alt-text="Screenshot of the high availability management tab with high availability group details and available options." lightbox="media/hcx/high-availability-management-group-details-available-options.png":::
+
+## Extend network using network HA group
+
+1. Locate **Services** in the left navigation and select **Network Extension**.
+1. Select **Create a Network Extension**.
+1. Choose the Network you want and select **Next**.
+1. In the **mandatory fields**, provide the gateway IP address in CIDR format, select the HA group under **Extension Appliances** (created in the previous section), and select **Submit** to extend the network.
+1. After the network is extended, under **Extension Appliance**, you can see the extension details and HA group.
+
+    :::image type="content" source="media/hcx/extend-network-details-high-availability-group.png" alt-text="Screenshot of the extension appliance details and high availability group." lightbox="media/hcx/extend-network-details-high-availability-group.png":::
+
+1. To migrate virtual machines (VMs), navigate to **Services** and select **Migration**.
+ 1. Select **Migrate** from the **Migration** window to start the workload mobility wizard.
+1. In **Workload Mobility**, add and replace details as needed, then select **Validate**.
+1. After validation completes, select **Go** to start the migration using Extended Network.
+
+    :::image type="content" source="media/hcx/extend-network-migrate-process.png" alt-text="Screenshot of the workload mobility page to edit details, validate them, and migrate using extended network." lightbox="media/hcx/extend-network-migrate-process.png":::
+
+## Next steps
+
+ Now that you've learned how to configure and extend HCX network extension high availability (HA), use the following resource to learn more about how to manage HCX network extension HA.
+
+[Managing Network Extension High Availability](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-4A745694-5E32-4E87-92D2-AC1191170412.html)
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Azure file shares backup is available in all regions, **except** for Germany Cen
| Setting | Limit | | | - | | Maximum number of restores per day | 10 |
-| Maximum number of files per restore, in case of ILR (Item level recovery) | 99 |
+| Maximum number of individual files or folders per restore, in case of ILR (Item level recovery) | 99 |
| Maximum recommended restore size per restore for large file shares | 15 TiB | ## Retention limits
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
Title: Troubleshoot Agent and extension issues description: Symptoms, causes, and resolutions of Azure Backup failures related to agent, extension, and disks. Previously updated : 12/01/2021 Last updated : 05/05/2022
To resolve this issue, remove the lock on the resource group of the VM, and retr
**Error code**: UserErrorKeyvaultPermissionsNotConfigured <br> **Error message**: Backup doesn't have sufficient permissions to the key vault for backup of encrypted VMs. <br>
-For a backup operation to succeed on encrypted VMs, it must have permissions to access the key vault. Permissions can be set through the [Azure portal](./backup-azure-vms-encryption.md) or through [PowerShell](./backup-azure-vms-automation.md#enable-protection).
+For a backup operation to succeed on encrypted VMs, it must have permissions to access the key vault. Permissions can be set through the [Azure portal](./backup-azure-vms-encryption.md), [PowerShell](./backup-azure-vms-automation.md#enable-protection), or the [CLI](./quick-backup-vm-cli.md#prerequisites-to-backup-encrypted-vms).
>[!Note] >If the required permissions to access the key vault have already been set, retry the operation after a little while.
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md
Title: Back up and restore encrypted Azure VMs description: Describes how to back up and restore encrypted Azure VMs with the Azure Backup service. Previously updated : 07/27/2021 Last updated : 05/05/2022+++ # Back up and restore encrypted Azure virtual machines
To set permissions:
1. Select **Save** to provide Azure Backup with the permissions.
+You can also set the access policy using [PowerShell](./backup-azure-vms-automation.md#enable-protection) or [CLI](./quick-backup-vm-cli.md#prerequisites-to-backup-encrypted-vms).
+ ## Next steps [Restore encrypted Azure virtual machines](restore-azure-encrypted-virtual-machines.md)
backup Backup Sql Server Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md
If you'd like to trigger a restore on the healthy SQL instances, do the followin
| Error message | Possible causes | Recommended action | ||||
-| Operation cancelled as a conflicting operation was already running on the same database. | The following are the cases where this error code might surface:<br><ul><li>Adding or dropping files to a database while a backup is happening.</li><li>Shrinking files while database backups are happening.</li><li>A database backup by another backup product configured for the database is in progress and a backup job is triggered by Azure Backup extension.</li></ul>| Disable the other backup product to resolve the issue.
+| Operation cancelled as a conflicting operation was already running on the same database. | You may get this error when an on-demand or scheduled backup job conflicts with a backup operation already running on the same database that was triggered by the Azure Backup extension.<br> The following are the scenarios when this error code might display:<br><ul><li>Full backup is running on the database and another Full backup is triggered.</li><li>Diff backup is running on the database and another Diff backup is triggered.</li><li>Log backup is running on the database and another Log backup is triggered.</li></ul>| After the conflicting operation fails, restart the operation.
### UserErrorFileManipulationIsNotAllowedDuringBackup | Error message | Possible causes | Recommended actions | ||||
-| Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. | You may get this error when the triggered on-demand, or the scheduled backup job has conflicts with an already running backup operation triggered by Azure Backup extension on the same database.<br> The following are the scenarios when this error code might display:<br><ul><li>Full backup is running on the database and another Full backup is triggered.</li><li>Diff backup is running on the database and another Diff backup is triggered.</li><li>Log backup is running on the database and another Log backup is triggered.</li></ul>| After the conflicting operation fails, restart the operation.
+| Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. | The following are the cases where this error code might surface:<br><ul><li>Adding or dropping files to a database while a backup is happening.</li><li>Shrinking files while database backups are happening.</li><li>A database backup by another backup product configured for the database is in progress and a backup job is triggered by Azure Backup extension.</li></ul>| Disable the other backup product to resolve the issue.
### UserErrorSQLPODoesNotExist
backup Quick Backup Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-cli.md
Title: Quickstart - Back up a VM with Azure CLI
description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on a VM, and create the initial recovery point with Azure CLI. ms.devlang: azurecli Previously updated : 01/31/2019 Last updated : 05/05/2022 +++ # Back up a virtual machine in Azure with the Azure CLI
az backup protection enable-for-vm \
> [!IMPORTANT] > While using CLI to enable backup for multiple VMs at once, ensure that a single policy doesn't have more than 100 VMs associated with it. This is a [recommended best practice](./backup-azure-vm-backup-faq.yml#is-there-a-limit-on-number-of-vms-that-can-be-associated-with-the-same-backup-policy). Currently, the PowerShell client doesn't explicitly block if there are more than 100 VMs, but the check is planned to be added in the future.
+### Prerequisites to backup encrypted VMs
+
+To enable protection on encrypted VMs (encrypted using BEK and KEK), you must provide the Azure Backup service permission to read keys and secrets from the key vault. To do so, set a *keyvault* access policy with the required permissions, as demonstrated below:
+
+```azurecli-interactive
+# Enter the name of the resource group where the key vault is located on this variable
+AZ_KEYVAULT_RGROUP=TestKeyVaultRG
+
+# Enter the name of the key vault on this variable
+AZ_KEYVAULT_NAME=TestKeyVault
+
+# Get the object id for the Backup Management Service on your subscription
+AZ_ABM_OBJECT_ID=$( az ad sp list --display-name "Backup Management Service" --query '[].objectId' -o tsv --only-show-errors )
+
+# This command will grant the permissions required by the Backup Management Service to access the key vault
+az keyvault set-policy --key-permissions get list backup --secret-permissions get list backup \
+ --resource-group $AZ_KEYVAULT_RGROUP --name $AZ_KEYVAULT_NAME --object-id $AZ_ABM_OBJECT_ID
+```
+ ## Start a backup job To start a backup now rather than wait for the default policy to run the job at the scheduled time, use [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now). This first backup job creates a full recovery point. Each backup job after this initial backup creates incremental recovery points. Incremental recovery points are storage and time-efficient, as they only transfer changes made since the last backup.
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md
This article explains how to restore a database to an Azure PostgreSQL server backed up by Azure Backup.
-You can restore a database to any Azure PostgreSQL server within the same subscription, if the service has the appropriate [set of permissions](backup-azure-database-postgresql-overview.md#azure-backup-authentication-with-the-postgresql-server) on the target server.
+You can restore a database to any Azure PostgreSQL server in the same or a different subscription, but within the same region as the vault, if the service has the appropriate [set of permissions](backup-azure-database-postgresql-overview.md#azure-backup-authentication-with-the-postgresql-server) on the target server.
## Restore Azure PostgreSQL database
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Previously updated : 04/13/2022 Last updated : 05/05/2022
If you don't have an Azure subscription, create a [free account](https://azure
* A [virtual network](../virtual-network/quick-create-portal.md). This will be the VNet to which you deploy Bastion. * A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
-* The following required roles for your resources.
-
- * Required VM roles:
-
+* **Required VM roles:**
* Reader role on the virtual machine. * Reader role on the NIC with private IP of the virtual machine.
- * Required inbound ports:
-
+* **Required inbound ports:**
* For Windows VMS - RDP (3389) * For Linux VMs - SSH (22)
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. Go to your VNet.
-1. Click **Bastion** in the left pane to open the **Bastion** page.
+1. Select **Bastion** in the left pane to open the **Bastion** page.
-1. On the Bastion page, click **Configure manually**. This lets you configure specific additional settings before deploying Bastion to your VNet.
+1. On the Bastion page, select **Configure manually**. This lets you configure specific additional settings before deploying Bastion to your VNet.
:::image type="content" source="./media/tutorial-create-host-portal/configure-manually.png" alt-text="Screenshot of Bastion page showing configure manually button." lightbox="./media/tutorial-create-host-portal/configure-manually.png"::: 1. On the **Create a Bastion** page, configure the settings for your bastion host. Project details are populated from your virtual network values. Configure the **Instance details** values.
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
:::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Bastion page instance values." lightbox="./media/tutorial-create-host-portal/instance-values.png":::
-1. Configure the **virtual networks** settings. Select the VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Resource Group and Region in the previous settings on this page.
+1. Configure the **virtual networks** settings. Select the VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Region in the previous settings on this page.
-1. To configure the AzureBastionSubnet, click **Manage subnet configuration**.
+1. To configure the AzureBastionSubnet, select **Manage subnet configuration**.
:::image type="content" source="./media/tutorial-create-host-portal/select-vnet.png" alt-text="Screenshot of configure virtual networks section." lightbox="./media/tutorial-create-host-portal/select-vnet.png":::
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
* The subnet name must be **AzureBastionSubnet**. * The subnet must be at least **/26 or larger** (/26, /25, /24 etc.) to accommodate features available with the Standard SKU.
- Click **Save** at the bottom of the page to save your values.
+ Select **Save** at the bottom of the page to save your values.
-1. At the top of the **Subnets** page, click **Create a Bastion** to return to the Bastion configuration page.
+1. At the top of the **Subnets** page, select **Create a Bastion** to return to the Bastion configuration page.
:::image type="content" source="./media/tutorial-create-host-portal/create-a-bastion.png" alt-text="Screenshot of Create a Bastion."lightbox="./media/tutorial-create-host-portal/create-a-bastion.png":::
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. When you finish specifying the settings, select **Review + Create**. This validates the values.
-1. Once validation passes, you can deploy Bastion. Click **Create**. You'll see a message letting you know that your deployment is process. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
+1. Once validation passes, you can deploy Bastion. Select **Create**. You'll see a message letting you know that your deployment is in progress. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
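If you prefer to script the same deployment, the following Azure CLI sketch mirrors these portal steps. It's a minimal example, not the article's procedure: the resource group, VNet name, address prefix, and region are placeholder assumptions, and the subnet must still be named **AzureBastionSubnet**.

```azurecli
# Assumes the virtual network MyVNet already exists in MyResourceGroup.
# Create the dedicated Bastion subnet (/26 or larger) and a Standard SKU public IP.
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/26

az network public-ip create \
  --resource-group MyResourceGroup \
  --name MyBastionIP \
  --sku Standard

# Deploy the bastion host; expect roughly 10 minutes for the deployment to finish.
az network bastion create \
  --resource-group MyResourceGroup \
  --name MyBastion \
  --vnet-name MyVNet \
  --public-ip-address MyBastionIP \
  --location eastus
```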
## <a name="connect"></a>Connect to a VM
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
For more information, see [Get speech recognition results](get-speech-recognitio
## Get partial results
-Consider when to start displaying captions, and how many words to show at a time. Speech recognition results are subject to change while an utterance is still being recognized. Partial partial results are returned with each `Recognizing` event. As each word is processed, the Speech service re-evaluates an utterance in the new context and again returns the best result. The new result isn't guaranteed to be the same as the previous result. The complete and final transcription of an utterance is returned with the `Recognized` event.
+Consider when to start displaying captions, and how many words to show at a time. Speech recognition results are subject to change while an utterance is still being recognized. Partial results are returned with each `Recognizing` event. As each word is processed, the Speech service re-evaluates an utterance in the new context and again returns the best result. The new result isn't guaranteed to be the same as the previous result. The complete and final transcription of an utterance is returned with the `Recognized` event.
> [!NOTE] > Punctuation of partial results is not available.
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
axios({
}, params: { 'api-version': '3.0',
- 'from': 'en',
'to': ['de', 'it'] }, data: [{
cognitive-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-overview.md
keywords: translator, text translation, machine translation, translation service
# What is Azure Cognitive Services Translator?
-Azure Cognitive Services Translator is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure Cognitive Services](../what-are-cognitive-services.md) family of REST APIs. Translator can be used with any operating system and powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you'll learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
Translator documentation contains the following article types:
communication-services Matching Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md
let worker = await client.registerWorker({
::: zone-end
+> [!NOTE]
+> If a worker is registered and idle for more than 7 days, it'll be automatically deregistered and you'll receive a `WorkerDeregistered` event from EventGrid.
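If you want to react to that event, one option is to route Job Router events from your Communication Services resource to your own endpoint with Event Grid. The sketch below is an assumption-level example: the resource ID and webhook URL are placeholders, and you should confirm the exact Job Router event types against the Event Grid schema documentation before filtering on them.

```azurecli
# Hypothetical example: forward events from a Communication Services resource to a webhook.
az eventgrid event-subscription create \
  --name router-worker-events \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
  --endpoint "https://contoso.example.com/api/router-events"
```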
+ ## Job Submission In the following example, we'll submit a job that
Once a match is made, an offer is created. The distribution policy that is attac
The [OfferIssued Event][offer_issued_event] includes details about the job, worker, how long the offer is valid and the `offerId` which you'll need to accept or decline the job.
+> [!NOTE]
+> The maximum lifetime of a job is 90 days, after which it'll automatically expire.
+ <!-- LINKS --> [subscribe_events]: ../../how-tos/router-sdk/subscribe-events.md [job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Previously updated : 04/11/2022 Last updated : 05/02/2022 # Observability in Azure Container Apps Preview
-Azure Container Apps provides built-in observability features that give you a holistic view of the behavior, performance, and health of your running container apps.
+Azure Container Apps provides several built-in observability features that give you a holistic view of your container app's health throughout its application lifecycle. These features help you monitor and diagnose the state of your app to improve performance and respond to critical problems.
These features include:
+- Log streaming
+- Container console
- Azure Monitor metrics - Azure Monitor Log Analytics-- Azure Monitor Alerts
+- Azure Monitor alerts
>[!NOTE] > While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications. > Although Container Apps doesn't support the Application Insights auto-instrumentation agent, you can instrument your application code using Application Insights SDKs.
+## Log streaming
+
+While developing and troubleshooting your container app, you often want to see a container's logs in real-time. Container Apps lets you view a stream of your container's `stdout` and `stderr` log messages using the Azure portal or the Azure CLI.
+
+### View log streams from the Azure portal
+
+Go to your container app page in the Azure portal. Select **Log stream** under the **Monitoring** section on the sidebar menu. For apps with more than one container, choose a container from the drop-down lists. When there are multiple revisions and replicas, first choose from the **Revision**, **Replica**, and then the **Container** drop-down lists.
+
+After you select the container, you can view the log stream in the viewing pane. You can stop the log stream and clear the log messages from the viewing pane. To save the log messages, you can copy and paste them into the editor of your choice.
++
+### Show log streams from Azure CLI
+
+Show a container's application logs from the Azure CLI with the `az containerapp logs show` command. You can view previous log entries using the `--tail` argument. To view a live stream, use the `--follow` argument. Select Ctrl-C to stop the live stream.
+
+For example, you can list the last 50 container log entries in a container app with a single revision, replica, and container using the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show \
+ --name album-api \
+ --resource-group album-api-rg \
+ --tail 50
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show `
+ --name album-api `
+ --resource-group album-api-rg `
+ --tail 50
+```
+++
+You can view a log stream from a container in a container app with multiple revisions, replicas, and containers by adding the `--revision`, `--replica`, and `--container` arguments to the `az containerapp logs show` command.
+
+Use the `az containerapp revision list` command to get the revision, replica, and container names to use in the `az containerapp logs show` command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision list \
+ --name album-api \
+ --resource-group album-api-rg
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision list `
+ --name album-api `
+ --resource-group album-api-rg
+```
+++
+Show the streaming container logs:
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp logs show --name album-api \
+ --resource-group album-api-rg \
+ --revision album-api--v2 \
+ --replica album-api--v2-5fdd5b4ff5-6mblw \
+ --container album-api-container \
+ --follow
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp logs show --name album-api `
+ --resource-group album-api-rg `
+ --revision album-api--v2 `
+ --replica album-api--v2-5fdd5b4ff5-6mblw `
+ --container album-api-container `
+ --follow
+```
+++
+## Container console
+
+Connecting to a container's console is useful when you want to troubleshoot and modify something inside a container. Azure Container Apps lets you connect to a container's console using the Azure portal or the Azure CLI.
+
+### Connect to a container console via the Azure portal
+
+Select **Console** in the **Monitoring** menu group from your container app page in the Azure portal. When your app has more than one container, choose a container from the drop-down list. When there are multiple revisions and replicas, first choose from the **Revision**, **Replica**, and then the **Container** drop-down lists.
+
+You can choose to access your console via bash, sh, or a custom executable. If you choose a custom executable, it must be available in the container.
++
+### Connect to a container console via the Azure CLI
+
+Use the `az containerapp exec` command to connect to a container console. Select Ctrl-D to exit the console.
+
+For example, you can connect to a container console in a container app with a single revision, replica, and container using the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp exec \
+ --name album-api \
+ --resource-group album-api-rg
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp exec `
+ --name album-api `
+ --resource-group album-api-rg
+```
+++
+To connect to a container console in a container app with multiple revisions, replicas, and containers include the `--revision`, `--replica`, and `--container` arguments in the `az containerapp exec` command.
+
+Use the `az containerapp revision list` command to get the revision, replica and container names to use in the `az containerapp exec` command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp revision list \
+ --name album-api \
+ --resource-group album-api-rg
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp revision list `
+ --name album-api `
+ --resource-group album-api-rg
+```
+++
+Connect to the container console.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp exec --name album-api \
+ --resource-group album-api-rg \
+ --revision album-api--v2 \
+ --replica album-api--v2-5fdd5b4ff5-6mblw \
+ --container album-api-container
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az containerapp exec --name album-api `
+ --resource-group album-api-rg `
+ --revision album-api--v2 `
+ --replica album-api--v2-5fdd5b4ff5-6mblw `
+ --container album-api-container
+```
+++ ## Azure Monitor metrics
-The Azure Monitor metrics feature allows you to monitor your app's compute and network usage. These metrics are available to view and analyze through the [metrics explorer in the Azure portal](../azure-monitor/essentials/metrics-getting-started.md). Metric data is also available through the [Azure CLI](/cli/azure/monitor/metrics), and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
+Azure Monitor collects metric data from your container app at regular intervals. These metrics help you gain insights into the performance and health of your container app. You can use metrics explorer in the Azure portal to monitor and analyze the metric data. You can also retrieve metric data through the [Azure CLI](/cli/azure/monitor/metrics) and Azure [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
### Available metrics for Container Apps
-Container Apps provides the following metrics for your container app.
+Container Apps provides these metrics.
|Title | Description | Metric ID |Unit | |||||
Container Apps provides the following metrics for your container app.
The metrics namespace is `microsoft.app/containerapps`.
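As a rough sketch of pulling one of these metrics from the command line (the app name, resource group, and metric name are placeholders to replace with your own values):

```azurecli
# Look up the container app's resource ID, then query the Requests metric
# over one-minute intervals.
CONTAINERAPP_ID=$(az containerapp show \
  --name album-api \
  --resource-group album-api-rg \
  --query id --output tsv)

az monitor metrics list \
  --resource $CONTAINERAPP_ID \
  --metric Requests \
  --interval PT1M \
  --output table
```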
-### View a snapshot of your app's metrics
+### View a current snapshot of your app's metrics
-Using the Azure portal, navigate to your container apps **Overview** page. The **Monitoring** section displays the current CPU, memory, and network utilization for your container app.
+On your container app **Overview** page in the Azure portal, select the **Monitoring** tab to display charts showing your container app's current CPU, memory, and network utilization.
:::image type="content" source="media/observability/metrics-in-overview-page.png" alt-text="Screenshot of the Monitoring section in the container app overview page.":::
-From this view, you can pin one or more charts to your dashboard. When you select a chart, it's opened in the metrics explorer.
+From this view, you can pin one or more charts to your dashboard or select a chart to open it in the metrics explorer.
-### View and analyze metric data with metrics explorer
+### View metrics with metrics explorer
-The Azure Monitor metrics explorer is available from the Azure portal, through the **Metrics** menu option in your container app page or the Azure **Monitor**->**Metrics** page.
+The Azure Monitor metrics explorer lets you create charts from metric data to help you analyze your container app's resource and network usage over time. You can pin charts to a dashboard or in a shared workbook.
-The metrics page allows you to create and view charts to display your container apps metrics. Refer to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) to learn more.
+Open the metrics explorer in the Azure portal by selecting **Metrics** from the sidebar menu on your container app page. To learn more about metrics explorer, go to [Getting started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
-When you first navigate to the metrics explorer, you'll see the main page. From here, select the metric that you want to display. You can add more metrics to the chart by selecting **Add Metric** in the upper left.
+Create a chart by selecting a **Metric**. You can modify the chart by changing aggregation, adding more metrics, changing time ranges and intervals, adding filters, and applying splitting.
:::image type="content" source="media/observability/metrics-main-page.png" alt-text="Screenshot of the metrics explorer from the container app resource page.":::
-You can filter your metrics by revision or replica. For example, to filter by a replica, select **Add filter**, then select a replica from the *Value* drop-down. You can also filter by your container app's revision.
+You can filter your metrics by revision or replica. For example, to filter by a replica, select **Add filter** and select a replica from the **Value** drop-down list.
:::image type="content" source="media/observability/metrics-add-filter.png" alt-text="Screenshot of the metrics explorer showing the chart filter options.":::
-You can split the information in your chart by revision or replica. For example, to split by revision, select **Applying splitting** and select **Revision** as the value. Splitting is only available when the chart contains a single metric.
+You can split the information in your chart by revision or replica. For example, to split by revision, select **Apply splitting** and select **Revision** from the **Values** drop-down list. Splitting is only available when the chart contains a single metric.
:::image type="content" source="media/observability/metrics-apply-splitting.png" alt-text="Screenshot of the metrics explorer that shows a chart with metrics split by revision.":::
-You can view metrics across multiple container apps to view resource utilization over your entire application.
+You can add more scopes to view metrics across multiple container apps.
:::image type="content" source="media/observability/metrics-across-apps.png" alt-text="Screenshot of the metrics explorer that shows a chart with metrics for multiple container apps."::: ## Azure Monitor Log Analytics
-Application logs are available through Azure Monitor Log Analytics. Each Container Apps environment includes a Log Analytics workspace, which provides a common log space for all container apps in the environment.
+Azure Monitor collects application logs and stores them in a Log Analytics workspace. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the application log data from all containers running in the environment.
+
+Application logs consist of messages written to each container's `stdout` and `stderr`. Additionally, if your container app is using Dapr, log entries from the Dapr sidecar are also collected.
-Application logs, consisting of the logs written to `stdout` and `stderr` from the container(s) in each container app, are collected and stored in the Log Analytics workspace. Additionally, if your container app is using Dapr, log entries from the Dapr sidecar are also collected.
+Azure Monitor stores Container Apps log data in the `ContainerAppConsoleLogs_CL` table. Create queries using this table to view your container app log data.
-To view these logs, you create Log Analytics queries. The log entries are stored in the ContainerAppConsoleLogs_CL table in the CustomLogs group.
+You can create and run queries using Log Analytics in the Azure portal or run queries using Azure CLI commands.
-The most commonly used Container Apps specific columns in ContainerAppConsoleLogs_CL:
+The most commonly used columns in `ContainerAppConsoleLogs_CL` include:
-|Column |Type |Description |
-||||
-|ContainerAppName_s | string | container app name |
-|ContainerGroupName_g| string |replica name|
-|ContainerId|string|container identifier|
-|ContainerImage_s |string| container image name |
-|EnvironmentName_s|string|Container Apps environment name|
-|Log_s |string| log message|
-|RevisionName_s|string|revision name|
+|Column |Description |
+|||
+| `ContainerAppName_s` | container app name |
+| `ContainerGroupName_g` | replica name |
+| `ContainerId` | container identifier |
+| `ContainerImage_s` | container image name |
+| `EnvironmentName_s` | Container Apps environment name |
+| `Message` | log message |
+| `RevisionName_s` | revision name |
-You can run Log Analytic queries via the Azure portal, the Azure CLI or PowerShell.
+### Use Log Analytics to query logs
-### Log Analytics via the Azure portal
+Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. Using Log Analytics, you can write simple or advanced queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
-In the Azure portal, logs are available from either the **Monitor**->**Logs** page or by navigating to your container app and selecting the **Logs** menu item. From Log Analytics interface, you can query the logs based on the **CustomLogs>ContainerAppConsoleLogs_CL** table.
+Start Log Analytics from **Logs** in the sidebar menu on your container app page. You can also start Log Analytics from **Monitor>Logs**.
+
+You can query the logs using the columns listed in the **CustomLogs > ContainerAppConsoleLogs_CL** table in the **Tables** tab.
:::image type="content" source="media/observability/log-analytics-query-page.png" alt-text="Screenshot of the Log Analytics query editor.":::
-Here's an example of a simple query, that displays log entries for the container app named *album-api*.
+Below is a simple query that displays log entries for the container app named *album-api*.
```kusto ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api'
-| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s
+| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Message
| take 100 ```
-For more information regarding the Log Analytics interface and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
+For more information regarding Log Analytics and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
-### Log Analytics via the Azure CLI and PowerShell
+### Query logs via the Azure CLI and PowerShell
-Application logs can be queried from the [Azure CLI](/cli/azure/monitor/metrics) and [PowerShell cmdlets](/powershell/module/az.monitor/get-azmetric).
+Container Apps logs can be queried using the [Azure CLI](/cli/azure/monitor/log-analytics).
-Example Azure CLI query to display the log entries for a container app:
+Here's an example Azure CLI query that outputs a table containing five log records for the container app named "album-api". The table columns are specified by the parameters after the `project` operator. The `$WORKSPACE_CUSTOMER_ID` variable contains the GUID of the Log Analytics workspace.
```azurecli
-az monitor log-analytics query --workspace --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Message, LogLevel_s | take 100" --out table
+az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Message, LogLevel_s | take 5" --out table
```
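One way to populate `$WORKSPACE_CUSTOMER_ID` is to read it from the Container Apps environment; this is a sketch, and the JMESPath property path is an assumption to verify against your environment's JSON:

```azurecli
# Read the Log Analytics workspace (customer) ID from the Container Apps environment.
WORKSPACE_CUSTOMER_ID=$(az containerapp env show \
  --name my-environment \
  --resource-group album-api-rg \
  --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId \
  --output tsv)
```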
-For more information, see [Viewing Logs](monitor.md#viewing-logs).
+For more information about using Azure CLI to view container app logs, see [Viewing Logs](monitor.md#viewing-logs).
## Azure Monitor alerts
-You can configure alerts to send notifications based on metrics values and Log Analytics queries.  In the Azure portal, you can add alerts from the metrics explorer and the Log Analytics interface.
+Azure Monitor alerts notify you so that you can respond quickly to critical issues. There are two types of alerts that you can define:
+
+- [metric alerts](../azure-monitor/alerts/alerts-metric-overview.md) based on metric data
+- [log alerts](../azure-monitor/alerts/alerts-unified-log.md) based on log data
-In the metrics explorer and the Log Analytics interface, alerts are based on existing charts and queries. You can manage your alerts from the **Monitor>Alerts** page. From this page, you can create metric and log alerts without existing metric charts or log queries. To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+You can create alert rules from metric charts in the metric explorer and from queries in Log Analytics. You can also define and manage alerts from the **Monitor>Alerts** page.
+
+To learn more about alerts, refer to [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-### Setting alerts in the metrics explorer
+### Create metric alerts in metrics explorer
-Metric alerts monitor metric data at set intervals and trigger when an alert rule condition is met. For more information, see [Metric alerts](../azure-monitor/alerts/alerts-metric-overview.md).
+When you add alert rules to a metric chart in the metrics explorer, alerts are triggered when the collected metric data matches alert rule conditions.
-in the metrics explorer, you can create metric alerts based on Container Apps metrics. Once you create a metric chart, you're able to create alert rules based on the chart's settings. You can create an alert rule by selecting **New alert rule**.
+After creating a [metric chart](#view-metrics-with-metrics-explorer), select **New alert rule** to create an alert rule based on the chart's settings. The new alert rule will include your chart's target resource, metric, splitting and filter dimensions.
:::image type="content" source="media/observability/metrics-alert-new-alert-rule.png" alt-text="Screenshot of the metrics explorer highlighting the new rule button.":::
-When you create a new alert rule, the rule creation pane is opened to the **Condition** tab. An alert condition is started for you based on the metric that you selected for the chart. You then edit the condition to configure threshold and other settings.
+When you select **New alert rule**, the rule creation pane opens to the **Condition** tab. Metrics explorer automatically creates an alert condition containing the chart's metric settings. Select the alert condition to add the threshold criteria to complete the condition.
:::image type="content" source="media/observability/metrics-alert-create-condition.png" alt-text="Screenshot of the metric explorer alert rule editor. A condition is automatically created based on the chart settings.":::
-You can add more conditions to your alert rule by selecting the **Add condition** option in the **Create an alert rule** pane.
+Add more conditions to your alert rule by selecting the **Add condition** option in the **Create an alert rule** pane.
:::image type="content" source="media/observability/metrics-alert-add-condition.png" alt-text="Screenshot of the metric explorer alert rule editor highlighting the Add condition button.":::
-When you add an alert condition, the **Select a signal** pane is opened. This pane lists the Container Apps metrics from which you can select for the condition.
+When adding a new alert condition, select from the metrics listed in the **Select a signal** pane.
:::image type="content" source="media/observability/metrics-alert-select-a-signal.png" alt-text="Screenshot of the metric explorer alert rule editor showing the Select a signal pane.":::
-After you've selected the metric, you can configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
+After selecting the metric, you can configure the settings for your alert condition. For more information about configuring alerts, see [Manage metric alerts](../azure-monitor/alerts/alerts-metric.md).
-You can add alert splitting to the condition so you can receive individual alerts for specific revisions or replicas.
+You can receive individual alerts for specific revisions or replicas by enabling alert splitting and selecting **Revision** or **Replica** from the **Dimension name** list.
-Example of setting a dimension for a condition:
+Example of selecting a dimension to split an alert.
:::image type="content" source="media/observability/metrics-alert-split-by-dimension.png" alt-text="Screenshot of the metrics explorer alert rule editor. This example shows the Split by dimensions options in the Configure signal logic pane.":::
-Once you create the alert rule, it's a resource in your resource group. To manage your alert rules, navigate to **Monitor>Alerts**.
- To learn more about configuring alerts, visit [Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md)
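Metric alert rules can also be created from the command line. The following is a hedged sketch rather than a prescribed configuration: the alert name, scope variable, metric, threshold, and time windows are placeholders.

```azurecli
# Alert when the container app receives more than 1000 requests in a 5-minute window.
az monitor metrics alert create \
  --name high-request-volume \
  --resource-group album-api-rg \
  --scopes $CONTAINERAPP_ID \
  --condition "total Requests > 1000" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Request volume is unusually high"
```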
-### Setting alerts using Log Analytics queries
+### Create log alert rules in Log Analytics
-You can use Log Analytics queries to periodically monitor logs and trigger alerts based on the results. The Log Analytics interface allows you to add alert rules to your queries. Once you have created and run a query, you're able to create an alert rule.
+Use Log Analytics to create alert rules from a log query. When you create an alert rule from a query, the query is run at set intervals triggering alerts when the log data matches the alert rule conditions. To learn more about creating log alert rules, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md).
+
+To create an alert rule, you first create and run the query to validate it. Then, select **New alert rule**.
:::image type="content" source="media/observability/log-alert-new-alert-rule.png" alt-text="Screenshot of the Log Analytics interface highlighting the new alert rule button.":::
-Selecting **New alert rule** opens the **Create an alert rule** editor, where you can configure the setting for your alerts.
+The **Create an alert rule** editor is opened to the **Condition** tab, which is populated with your log query. Configure the settings in the **Measurement** and **Alert logic** sections to complete the alert rule.
:::image type="content" source="media/observability/log-alerts-rule-editor.png" alt-text="Screenshot of the Log Analytics alert rule editor.":::
-To learn more about creating a log alert, see [Manage log alerts](../azure-monitor/alerts/alerts-log.md)
-
-Enabling splitting will send individual alerts for each dimension you define. Container Apps supports the following alert splitting dimensions:
+Optionally, you can enable alert splitting in the alert rule to send individual alerts for each dimension you select in the **Split by dimensions** section of the editor. The dimensions for Container Apps are:
- app name - revision
Enabling splitting will send individual alerts for each dimension you define. C
:::image type="content" source="media/observability/log-alerts-splitting.png" alt-text="Screenshot of the Log Analytics alert rule editor showing the Split by dimensions options":::
-To learn more about log alerts, refer to [Log alerts in Azure Monitor](../azure-monitor/alerts/alerts-unified-log.md).
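Log alert rules can also be created programmatically with the `az monitor scheduled-query` commands. The sketch below is an assumption-level example: the workspace resource ID, query, placeholder name, and condition grammar should be checked against the current CLI reference before use.

```azurecli
# Hypothetical log alert: fire when more than 10 matching log entries arrive in 15 minutes.
az monitor scheduled-query create \
  --name album-api-log-alert \
  --resource-group album-api-rg \
  --scopes "<log-analytics-workspace-resource-id>" \
  --condition "count 'AppLogs' > 10" \
  --condition-query AppLogs="ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api'" \
  --window-size 15m \
  --evaluation-frequency 5m
```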
+## Observability throughout the application lifecycle
+
+With the Container Apps observability features, you can monitor your app throughout the development-to-production lifecycle. The following sections describe the most useful monitoring features for each phase.
+
+### Development and test phase
+
+During the development and test phase, real-time access to your containers' application logs and console is critical for debugging issues. Container Apps provides:
+
+- [log streaming](#log-streaming) for real-time monitoring
+- [container console](#container-console) access to debug your application
+
+### Deployment phase
+
+Once you deploy your container app, it's essential to keep monitoring it. Continuous monitoring helps you quickly identify problems with error rates, performance, or key metrics.
+
+Azure Monitor gives you the ability to track your app with the following features:
+
+- [Azure Monitor Metrics](#azure-monitor-metrics): monitor and analyze key metrics
+- [Azure Monitor Alerts](#azure-monitor-alerts): send alerts for critical conditions
+- [Azure Monitor Log Analytics](#azure-monitor-log-analytics): view and analyze application logs
+
+### Maintenance phase
+
+Container Apps manages updates to your container app by creating [revisions](revisions.md). You can run multiple revisions concurrently for A/B testing or blue-green deployments. These observability features will help you monitor your app across revisions:
-> [!TIP]
-> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+- [Azure Monitor Metrics](#azure-monitor-metrics): monitor and compare key metrics for multiple revisions
+- [Azure Monitor Alerts](#azure-monitor-alerts): send individual alerts per revision
+- [Azure Monitor Log Analytics](#azure-monitor-log-analytics): view, analyze and compare log data for multiple revisions
## Next steps
-> [!div class="nextstepaction"]
-> [Health probes in Azure Container Apps](health-probes.md)
-> [Monitor an App in Azure Container Apps](monitor.md)
+- [Monitor an app in Azure Container Apps Preview](monitor.md)
+- [Health probes in Azure Container Apps](health-probes.md)
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
description: Learn how to configure customer-managed keys for your Azure Cosmos
Previously updated : 04/29/2022 Last updated : 05/05/2022 ms.devlang: azurecli
Because a system-assigned managed identity can only be retrieved after the creat
### To use a user-assigned managed identity
-> [!IMPORTANT]
-> When using a user-assigned managed identity, firewall rules on the Azure Key Vault account aren't currently supported. You must keep your Azure Key Vault account accessible from all networks.
- 1. When creating the new access policy in your Azure Key Vault account as described [above](#add-access-policy), use the `Object ID` of the managed identity you wish to use instead of Azure Cosmos DB's first-party identity. 1. When creating your Azure Cosmos DB account, you must enable the user-assigned managed identity and specify that you want to use this identity when accessing your encryption keys in Azure Key Vault. You can do this:
cosmos-db Set Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/set-throughput.md
All containers created inside a database with provisioned throughput must be cre
If the workload on a logical partition consumes more than the throughput that's allocated to a specific logical partition, your operations are rate-limited. When rate-limiting occurs, you can either increase the throughput for the entire database or retry the operations. For more information on partitioning, see [Logical partitions](partitioning-overview.md).
-Containers in a shared throughput database share the throughput (RU/s) allocated to that database. With standard (manual) provisioned throughput, you can have up to 25 containers with a minimum of 400 RU/s on the database. With autoscale provisioned throughput, you can have up to 25 containers in a database with autoscale max 4000 RU/s (scales between 400 - 4000 RU/s).
+Containers in a shared throughput database share the throughput (RU/s) allocated to that database. With standard (manual) provisioned throughput, you can have up to 25 containers with a minimum of 400 RU/s on the database. With autoscale provisioned throughput, you can have up to 25 containers in a database with autoscale minimum 1000 RU/s (scales between 100 - 1000 RU/s).
> [!NOTE] > In February 2020, we introduced a change that allows you to have a maximum of 25 containers in a shared throughput database, which better enables throughput sharing across the containers. After the first 25 containers, you can add more containers to the database only if they are [provisioned with dedicated throughput](#set-throughput-on-a-database-and-a-container), which is separate from the shared throughput of the database.<br>
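As a rough CLI sketch of creating a shared-throughput database with the autoscale entry point (the account, database, and resource group names are placeholders):

```azurecli
# Create a database whose throughput is shared by its containers,
# autoscaling between 100 and 1000 RU/s.
az cosmosdb sql database create \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --name SharedThroughputDB \
  --max-throughput 1000
```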
This table shows a comparison between provisioning standard (manual) throughput
|**Parameter** |**Standard (manual) throughput on a database** |**Standard (manual) throughput on a container**|**Autoscale throughput on a database** | **Autoscale throughput on a container**| ||||||
-|Entry point (minimum RU/s) |400 RU/s. Can have up to 25 containers with no RU/s minimum per container.</li> |400| Autoscale between 400 - 4000 RU/s. Can have up to 25 containers with no RU/s minimum per container.</li> | Autoscale between 400 - 4000 RU/s.|
-|Minimum RU/s per container|--|400|--|Autoscale between 400 - 4000 RU/s|
+|Entry point (minimum RU/s) |400 RU/s. Can have up to 25 containers with no RU/s minimum per container.</li> |400| Autoscale between 100 - 1000 RU/s. Can have up to 25 containers with no RU/s minimum per container.</li> | Autoscale between 100 - 1000 RU/s.|
+|Minimum RU/s per container|--|400|--|Autoscale between 100 - 1000 RU/s|
|Maximum RUs|Unlimited, on the database.|Unlimited, on the container.|Unlimited, on the database.|Unlimited, on the container. |RUs assigned or available to a specific container|No guarantees. RUs assigned to a given container depend on the properties. Properties can be the choice of partition keys of containers that share the throughput, the distribution of the workload, and the number of containers. |All the RUs configured on the container are exclusively reserved for the container.|No guarantees. RUs assigned to a given container depend on the properties. Properties can be the choice of partition keys of containers that share the throughput, the distribution of the workload, and the number of containers. |All the RUs configured on the container are exclusively reserved for the container.| |Maximum storage for a container|Unlimited.|Unlimited|Unlimited|Unlimited|
cosmos-db Conceptual Resilient Sdk Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/conceptual-resilient-sdk-applications.md
Title: Designing resilient applications with Azure Cosmos DB SDKs
description: Learn how to build resilient applications using the Azure Cosmos DB SDKs and what all are the expected error status codes to retry on. Previously updated : 03/25/2022 Last updated : 05/05/2022
Your application should be resilient to a [certain degree](#when-to-contact-cust
The short answer is **yes**. But not all errors make sense to retry on, some of the error or status codes aren't transient. The table below describes them in detail:
-| Status Code | Should add retry | Description |
+| Status Code | Should add retry | SDKs retry | Description |
|-|-|-|-|
-| 400 | No | [Bad request](troubleshoot-bad-request.md) |
-| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | Optional | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
-| 409 | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
-| 410 | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
-| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
-| 413 | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
-| 429 | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
-| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
-| 500 | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
-
-In the table above, all the status codes marked with **Yes** should have some degree of retry coverage in your application.
+| 400 | No | No | [Bad request](troubleshoot-bad-request.md) |
+| 401 | No | No | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | Optional | No | [Forbidden](troubleshoot-forbidden.md) |
+| 404 | No | No | [Resource is not found](troubleshoot-not-found.md) |
+| 408 | Yes | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
+| 409 | No | No | Conflict failure is when the identity (ID and partition key) provided for a resource on a write operation has been taken by an existing resource or when a [unique key constraint](../unique-keys.md) has been violated. |
+| 410 | Yes | Yes | Gone exceptions (transient failure that shouldn't violate SLA) |
+| 412 | No | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an [optimistic concurrency](database-transactions-optimistic-concurrency.md#optimistic-concurrency-control) error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
+| 413 | No | No | [Request Entity Too Large](../concepts-limits.md#per-item-limits) |
+| 429 | Yes | Yes | It's safe to retry on a 429. Review the [guide to troubleshoot HTTP 429](troubleshoot-request-rate-too-large.md).|
+| 449 | Yes | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
+| 500 | No | No | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
+| 503 | Yes | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
+
+In the table above, all the status codes marked with **Yes** in the second column should have some degree of retry coverage in your application.
### HTTP 403
Because of the nature of timeouts and connectivity failures, these might not app
It's recommended for applications to have their own retry policy for these scenarios and take into consideration how to resolve write timeouts. For example, retrying on a Create timeout can yield an HTTP 409 (Conflict) if the previous request did reach the service, but it would succeed if it didn't.
+### Language specific implementation details
+
+For further implementation details regarding a language see:
+
+* [.NET SDK implementation information](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/docs/)
+* [Java SDK implementation information](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos/docs/)
+ ## Do retries affect my latency? From the client perspective, any retries will affect the end to end latency of an operation. When your application P99 latency is being affected, understanding the retries that are happening and how to address them is important.
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
tags: billing,top-support-issue
Previously updated : 04/27/2022 Last updated : 05/04/2022 # Transfer billing ownership of an MOSP Azure subscription to another account
-This article shows the steps needed to transfer billing ownership of an (MOSP) Microsoft Online Services Program, also referred to as pay-as-you-go, Azure subscription to another account. Before you transfer billing ownership for a subscription, read [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
+This article shows the steps needed to transfer billing ownership of an (MOSP) Microsoft Online Services Program, also referred to as pay-as-you-go, Azure subscription to another MOSP account.
+
+Before you transfer billing ownership for a subscription, read [Azure subscription and reservation transfer hub](subscription-transfer.md) to ensure that your transfer type is supported.
If you want to keep your billing ownership but change subscription type, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 03/29/2022 Last updated : 05/05/2022
A user must have an Owner role on an Enrollment Account to create a subscription
* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required) which makes you an Owner of the Enrollment Account. * An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
-To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). When using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list). For more information about the EA role assignment API request, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md). The article includes a list of roles (and role definition IDs) that can be assigned to an SPN.
+To use a service principal (SPN) to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
+
+When using an SPN to create subscriptions, use the ObjectId of the Azure AD Application Registration as the Service Principal ObjectId using [Azure Active Directory PowerShell](/powershell/module/azuread/get-azureadserviceprincipal?view=azureadps-2.0&preserve-view=true ) or [Azure CLI](/cli/azure/ad/sp#az-ad-sp-list). You can also use the steps at [Find your SPN and tenant ID](assign-roles-azure-service-principals.md#find-your-spn-and-tenant-id) to find the object ID in the Azure portal for an existing SPN.
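For example, a hedged CLI sketch for looking up that object ID (the display name is a placeholder; on older Azure CLI versions the property may be exposed as `objectId` rather than `id`):

```azurecli
# Return the service principal object ID for an existing app registration.
az ad sp list \
  --display-name "MyEnrollmentAccountApp" \
  --query "[].id" \
  --output tsv
```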
+
+For more information about the EA role assignment API request, see [Assign roles to Azure Enterprise Agreement service principal names](assign-roles-azure-service-principals.md). The article includes a list of roles (and role definition IDs) that can be assigned to an SPN.
> [!NOTE] > - Ensure that you use the correct API version to give the enrollment account owner permissions. For this article and for the APIs documented in it, use the [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) API.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 04/21/2022 Last updated : 05/04/2022
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MPA | MPA | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. | | MOSP (PAYG) | MOSP (PAYG) | <ul><li> If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md). <li> Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. | | MOSP (PAYG) | MCA - individual | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. |
-| MOSP (PAYG) | EA | For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). |
+| MOSP (PAYG) | EA | <ul><li>If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea). <li> If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
| MOSP (PAYG) | MCA - Enterprise | <ul><li> For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <li> Self-service reservation transfers are supported. | ## Perform resource transfers
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
tags: billing
Previously updated : 03/22/2022 Last updated : 05/05/2022
Information in the following table about metric and filter can help solve for co
| **Reservation purchases** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/ChargeType%20eq%20'Purchase' | | **Refunds** | Replace {metric} with ActualCost<br><br>Replace {filter} with: properties/ChargeType%20eq%20'Refund' |
-## Download the usage CSV file with new data
+## Download the EA usage CSV file with new data
If you're an EA admin, you can download the CSV file that contains new usage data from the Azure portal. This data isn't available from the EA portal (ea.azure.com); you must download the usage file from the Azure portal (portal.azure.com) to see the new data.
In the Azure portal, navigate to [Cost management + billing](https://portal.azur
![Example showing where to Download the CSV usage data file in the Azure portal](./media/understand-reserved-instance-usage-ea/portal-download-csv.png) 4. In **Download Usage + Charges**, under **Usage Details Version 2**, select **All Charges (usage and purchases)**, and then select **Download**. Repeat for **Amortized charges (usage and purchases)**.
+## Download usage for your Microsoft Customer Agreement
+
+To view and download usage data for a billing profile, you must be a billing profile Owner, Contributor, Reader, or Invoice manager.
+
+### Download usage for billed charges
+
+1. Search for **Cost Management + Billing**.
+2. Select a billing profile.
+3. Select **Invoices**.
+4. In the invoice grid, find the row of the invoice corresponding to the usage you want to download.
+5. Select the ellipsis (`...`) at the end of the row.
+6. In the download context menu, select **Azure usage and charges**.
## Common cost and usage tasks
data-factory Data Flow Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-errors.md
Previously updated : 01/21/2022 Last updated : 04/29/2022 # Common error codes and messages
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Xml-InvalidReferenceResource - **Message**: Reference resource in xml data file cannot be resolved. - **Cause**: The reference resource in the XML data file cannot be resolved.-- **Recommendation**: You should check the reference resource in the XML data file.
+- **Recommendation**: Check the reference resource in the XML data file.
## Error code: DF-Xml-InvalidSchema - **Message**: Schema validation failed.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-PartitionKeyMissed - **Message**: Partition key path should be specified for update and delete operations.-- **Cause**: The partition key path is missed in the Azure Cosmos DB sink.-- **Recommendation**: Use the providing partition key in the Azure Cosmos DB sink settings.
+- **Cause**: The partition key path is missing in the Azure Cosmos DB sink.
+- **Recommendation**: Provide the partition key in the Azure Cosmos DB sink settings.
## Error code: DF-Cosmos-InvalidPartitionKey - **Message**: Partition key path cannot be empty for update and delete operations.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-IdPropertyMissed - **Message**: 'id' property should be mapped for delete and update operations. - **Cause**: The `id` property is missed for update and delete operations.-- **Recommendation**: Make sure that the input data has an `id` column in Cosmos DB sink settings. If no, use **select or derive transformation** to generate this column before sink.
+- **Recommendation**: Make sure that the input data has an `id` column in Azure Cosmos DB sink transformation settings. If not, use a select or derived column transformation to generate this column before the sink transformation.
## Error code: DF-Cosmos-InvalidPartitionKeyContent - **Message**: partition key should start with /.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Executor-OutOfMemorySparkBroadcastError - **Message**: Explicitly broadcasted dataset using left/right option should be small enough to fit in node's memory. You can choose broadcast option 'Off' in join/exists/lookup transformation to avoid this issue or use an integration runtime with higher memory.-- **Cause**: The size of the broadcasted table far exceeds the limitation of the node memory.
+- **Cause**: The size of the broadcasted table far exceeds the limits of the node memory.
- **Recommendation**: The broadcast left/right option should be used only for smaller dataset size which can fit into node's memory, so make sure to configure the node size appropriately or turn off the broadcast option. ## Error code: DF-MSSQL-InvalidFirewallSetting
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-FailToResetThroughput - **Message**: Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime.-- **Cause**: The throughput scale operation of the Cosmos DB cannot be performed because another scale operation is in progress.-- **Recommendation**: Please log in your Cosmos account, and manually change its container's throughput to be auto scale or add custom activities after data flows to reset the throughput.
+- **Cause**: The throughput scale operation of the Azure Cosmos DB cannot be performed because another scale operation is in progress.
+- **Recommendation**: Sign in to your Azure Cosmos DB account and manually change the container's throughput to autoscale (see the CLI sketch below), or add a custom activity after mapping data flows to reset the throughput.
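A hedged sketch of switching an existing container to autoscale throughput from the Azure CLI (the account, database, and container names are placeholders):

```azurecli
# Migrate a container from manual (standard) throughput to autoscale.
az cosmosdb sql container throughput migrate \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --database-name MyDatabase \
  --name MyContainer \
  --throughput-type autoscale
```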
## Error code: DF-Executor-InvalidPath - **Message**: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden.
This article lists common error codes and messages reported by mapping data flow
- **Recommendation**: Please update AdobeIntegration settings while only privacy 'GDPR' is supported. ## Error code: DF-Executor-RemoteRPCClientDisassociated-- **Message**: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues.-- **Cause**: Data flow activity runs failed because of the transient network issue or because one node in spark cluster runs out of memory.
+- **Message**: Job aborted due to stage failure. Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues.
+- **Cause**: Data flow activity run failed because of transient network issues or one node in spark cluster ran out of memory.
- **Recommendation**: Use the following options to solve this problem: - Option-1: Use a powerful cluster (both driver and executor nodes have enough memory to handle big data) to run data flow pipelines with setting "Compute type" to "Memory optimized". The settings are shown in the picture below.
This article lists common error codes and messages reported by mapping data flow
- **Recommendation**: Check the format and change it to the proper one. ## Error code: DF-Synapse-InvalidTableDBName
+- **Message**: The table/database name is not a valid name for tables/databases. Valid names only contain alphabet characters, numbers and _.
- **Cause**: The table/database name is not valid. - **Recommendation**: Use a valid name for the table/database. Valid names only contain alphabet characters, numbers, and `_`. ## Error code: DF-Synapse-InvalidOperation - **Cause**: The operation is not supported.-- **Recommendation**: Change the invalid operation.
+- **Recommendation**: Change the **Update method** configuration, because delete, update, and upsert are not supported in Workspace DB.
## Error code: DF-Synapse-DBNotExist - **Cause**: The database does not exist.
This article lists common error codes and messages reported by mapping data flow
## Error code: DF-Cosmos-ShortTypeNotSupport - **Message**: Short data type is not supported in Cosmos DB. - **Cause**: The short data type is not supported in the Azure Cosmos DB.-- **Recommendation**: Add a derived transformation to convert related columns from short to integer before using them in the Cosmos sink.
+- **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation.
## Error code: DF-Blob-FunctionNotSupport - **Message**: This endpoint does not support BlobStorageEvents, SoftDelete or AutomaticSnapshot. Please disable these account features if you would like to use this endpoint.
This article lists common error codes and messages reported by mapping data flow
- **Cause**: There is no enough permission to read/write Azure Cosmos DB data. - **Recommendation**: Please use the read-write key to access Azure Cosmos DB.
+## Error code: DF-Cosmos-ResourceNotFound
+- **Message**: Resource not found.
+- **Cause**: An invalid configuration is provided (for example, a partition key with invalid characters), or the resource does not exist.
+- **Recommendation**: To solve this issue, refer to [Diagnose and troubleshoot Azure Cosmos DB not found exceptions](../cosmos-db/troubleshoot-not-found.md).
+
+## Error code: DF-Snowflake-IncompatibleDataType
+- **Message**: Expression type does not match column data type, expecting VARIANT but got VARCHAR.
+- **Cause**: The input data column(s) are of string type, while the related column(s) in the Snowflake sink transformation are of VARIANT type.
+- **Recommendation**: The Snowflake VARIANT type only accepts data flow values of struct, map, or array type. If the value of your input data column(s) is JSON, XML, or another string format, use a parse transformation before the Snowflake sink transformation to convert the value into a struct, map, or array type.
+
+## Error code: DF-JSON-WrongDocumentForm
+- **Message**: Malformed records are detected in schema inference. Parse Mode: FAILFAST.
+- **Cause**: Wrong document form is selected to parse JSON file(s).
+- **Recommendation**: Try different **Document form** (**Single document**/**Document per line**/**Array of documents**) in JSON settings. Most cases of parsing errors are caused by wrong configuration.
+
+## Error code: DF-File-InvalidSparkFolder
+- **Message**: Failed to read footer for file
+- **Cause**: The *_spark_metadata* folder is created by the structured streaming job.
+- **Recommendation**: Delete *_spark_metadata* folder if it exists. For more information, refer to this [article](https://forums.databricks.com/questions/12447/javaioioexception-could-not-read-footer-for-file-f.html).
+
+## Error code: DF-Executor-InternalServerError
+- **Message**: Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance
+- **Cause**: The data flow execution failed because of a system error.
+- **Recommendation**: To solve this issue, refer to [Internal server errors](data-flow-troubleshoot-guide.md#internal-server-errors).
+
+## Error code: DF-Executor-InvalidStageConfiguration
+- **Message**: Storage with user assigned managed identity authentication in staging is not supported
+- **Cause**: An exception occurred because of an invalid staging configuration.
+- **Recommendation**: The user-assigned managed identity authentication is not supported in staging. Use a different authentication method to create an Azure Data Lake Storage Gen2 or Azure Blob Storage linked service, then use it as staging in mapping data flows.
+
+## Error code: DF-GEN2-InvalidStorageAccountConfiguration
+- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.
+- **Cause**: The storage account is too old.
+- **Recommendation**: Create a new storage account.
+
+## Error code: DF-AzureDataExplorer-InvalidOperation
+- **Message**: Blob operation is not supported on older storage accounts. Creating a new storage account may fix the issue.
+- **Cause**: Operation is not supported.
+- **Recommendation**: Change **Update method** configuration as delete, update and upsert are not supported in Azure Data Explorer.
+
+## Error code: DF-AzureDataExplorer-WriteTimeout
+- **Message**: Operation timeout while writing data.
+- **Cause**: Operation times out while writing data.
+- **Recommendation**: Increase the value of the **Timeout** option in the sink transformation settings.
+
+## Error code: DF-AzureDataExplorer-ReadTimeout
+- **Message**: Operation timeout while reading data.
+- **Cause**: Operation times out while reading data.
+- **Recommendation**: Increase the value of the **Timeout** option in the source transformation settings.
+ ## Next steps For more help with troubleshooting, see these resources:
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
+
+ Title: Extract data from PDF
+description: Learn how to use a solution template to extract data from a PDF source using Azure Data Factory.
+++++++ Last updated : 04/22/2022++
+# Extract data from PDF
++
+This article describes a solution template that you can use to extract data from a PDF source using Azure Data Factory and Form Recognizer.
+
+## About this solution template
+
+This template analyzes data from a PDF URL source using two Azure Form Recognizer calls. Then, it transforms the output to readable tables in a dataflow and outputs the data to a storage sink.
+
+This template contains two activities:
+- **Web Activity** to call Azure Form Recognizer's layout model API
+- **Data flow** to transform extracted data from PDF
+
+This template defines 4 parameters:
+- *FormRecognizerURL* is the Form Recognizer URL ("https://{endpoint}/formrecognizer/v2.1/layout/analyze"). Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You need to replace the default value with your own URL.
+- *FormRecognizerKey* is the Form Recognizer subscription key. You need to replace the default value with your own subscription key.
+- *PDF_SourceURL* is the URL of your PDF source. You need to replace the default value with your own URL.
+- *outputFolder* is the name of the folder path where you want your files to be in your destination store. You need to replace the default value with your own folder path.
+
+## Prerequisites
+
+* Azure Form Recognizer Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer))
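+
+If you prefer the command line, the prerequisite resource can also be created with the Azure CLI. The following sketch uses placeholder names and an assumed region and pricing tier; adjust them to your environment.
+
+```azurecli
+# Create a Form Recognizer resource, then list its keys to get the subscription key
+az cognitiveservices account create \
+    --name <form-recognizer-name> \
+    --resource-group <resource-group-name> \
+    --kind FormRecognizer \
+    --sku S0 \
+    --location eastus
+
+az cognitiveservices account keys list \
+    --name <form-recognizer-name> \
+    --resource-group <resource-group-name>
+```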
+
+## How to use this solution template
+
+1. Go to template **Extract data from PDF**. Create a **New** connection to your source storage store or choose an existing connection. The source storage store is where you want to copy files from.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to the source in template set up.":::
+
+2. Create a **New** connection to your destination storage store or choose an existing connection.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-2.png" alt-text="Screenshot of how to create a new connection or select existing connection from a drop down menu to Form Recognizer in template set up.":::
+
+3. Select **Use this template**.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-3.png" alt-text="Screenshot of how to complete the template by clicking use this template at the bottom of the screen.":::
+
+4. You should see the following pipeline:
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-4.png" alt-text="Screenshot of pipeline view with web activity linking to a dataflow activity.":::
+
+5. Select **Debug**.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-5.png" alt-text="Screenshot of how to Debug pipeline using the debug button on the top banner of the screen.":::
+
+6. Enter parameter values, review results, and publish.
+
+    :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-6.png" alt-text="Screenshot of where to enter pipeline debug parameters on a panel to the right.":::
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-7.png" alt-text="Screenshot of the results that return when the pipeline is triggered.":::
+
+## Next steps
+- [What's New in Azure Data Factory](whats-new.md)
+- [Introduction to Azure Data Factory](introduction.md)
+
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
+
+ Title: PII detection and masking
+description: Learn how to use a solution template to detect and mask PII data using Azure Data Factory.
+++++++ Last updated : 04/22/2022++
+# PII detection and masking
++
+This article describes a solution template that you can use to detect and mask PII data in your data flow with Azure Cognitive Services.
+
+## About this solution template
+
+This template retrieves a dataset from an Azure Data Lake Storage Gen2 source. Then, a derived column transformation creates a request body, and an external call transformation calls Azure Cognitive Services to mask PII before the data is loaded to the destination sink.
+
+The template contains one activity:
+- **Data flow** to detect and mask PII data
+
+This template defines 3 parameters:
+- *sourceFileSystem* is the folder path where files are read from the source store. You need to replace the default value with your own folder path.
+- *sourceFilePath* is the subfolder path where files are read from the source store. You need to replace the default value with your own subfolder path.
+- *sourceFileName* is the name of the file that you would like to transform. You need to replace the default value with your own file name.
+
+## Prerequisites
+
+* Azure Cognitive Services Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics))
+
+## How to use this solution template
+
+1. Go to template **PII detection and masking**. Create a **New** connection to your source storage store or choose an existing connection. The source storage store is where you want to read files from.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-1.png" alt-text="Screenshot of template set up page where you can create a new connection or select an existing connection to the source from a drop down menu.":::
+
+2. Create a **New** connection to your destination storage store or choose an existing connection.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-2.png" alt-text="Screenshot of template set up page to create a new connection or select an existing connection to Cognitive Services from a drop down menu.":::
+
+3. Select **Use this template**.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-3.png" alt-text="Screenshot of button in bottom left corner to finish creating pipeline.":::
+
+4. You should see the following pipeline:
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/PII-detection-and-masking-4.png" alt-text="Screenshot of pipeline view with one dataflow activity.":::
+
+5. Selecting the data flow activity will show the following data flow:
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-5.png" alt-text="Screenshot of the dataflow view with a source leading to three transformations and then a sink.":::
+
+6. Turn on **Data flow debug**.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-6.png" alt-text="Screenshot of the Data flow debug button found in the top banner of the screen.":::
+
+7. Update **Parameters** in **Debug Settings** and **Save**.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-7.png" alt-text="Screenshot of the Debug settings button on the top banner of the screen to the right of debug button.":::
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-7b.png" alt-text="Screenshot of where to update parameters in Debug settings in a panel on the right side of the screen.":::
+
+8. Preview the results in **Data Preview**.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-8.png" alt-text="Screenshot of dataflow data preview at the bottom of the screen.":::
+
+9. When data preview results are as expected, update the **Parameters**.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-9.png" alt-text="Screenshot of dataflow parameters at the bottom of the screen under Parameters.":::
+
+10. Return to pipeline and select **Debug**. Review results and publish.
+
+ :::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-10.png" alt-text="Screenshot of the results that return after the pipeline is triggered.":::
+
+## Next steps
+
+- [What's New in Azure Data Factory](whats-new.md)
+- [Introduction to Azure Data Factory](introduction.md)
+++++
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Azure **Subscription Owners** and **Subscription Contributor**s can onboard, upd
## Onboard a trial subscription
-If you would like to evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports 1000 committed devices. Using the trial lets you deploy one more Defender for IoT sensors on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to download an on-premises management console to view aggregated information generated by sensors.
+If you would like to evaluate Defender for IoT, you can use a trial subscription. The trial is valid for 30 days and supports 1000 committed devices. Using the trial lets you deploy one or more Defender for IoT sensors on your network. Use the sensors to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. The trial also allows you to download an on-premises management console to view aggregated information generated by sensors.
This section describes how to create a trial subscription for a sensor.
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
+
+ Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS"
+
+description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance migrations.
+++++++++ Last updated : 05/02/2022++
+# Custom roles for SQL Server to Azure SQL Managed Instance migrations using ADS
+
+This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with SQL Managed Instance as a target.
+
+The `AssignableScopes` section of the role definition JSON string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. Note that defining the assignable scopes doesn't perform the actual role assignment.
+
+```json
+{
+ "properties": {
+ "roleName": "DmsCustomRoleDemoForMI",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>",
+ "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<managedInstanceRG>",
+ "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/read",
+ "Microsoft.Storage/storageAccounts/listkeys/action",
+ "Microsoft.Storage/storageAccounts/blobServices/read",
+ "Microsoft.Storage/storageAccounts/blobServices/write",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+ "Microsoft.Sql/managedInstances/read",
+ "Microsoft.Sql/managedInstances/write",
+ "Microsoft.Sql/managedInstances/databases/read",
+ "Microsoft.Sql/managedInstances/databases/write",
+ "Microsoft.Sql/managedInstances/databases/delete",
+ "Microsoft.DataMigration/locations/operationResults/read",
+ "Microsoft.DataMigration/locations/operationStatuses/read",
+ "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read",
+ "Microsoft.DataMigration/databaseMigrations/write",
+ "Microsoft.DataMigration/databaseMigrations/read",
+ "Microsoft.DataMigration/databaseMigrations/delete",
+ "Microsoft.DataMigration/databaseMigrations/cancel/action",
+ "Microsoft.DataMigration/databaseMigrations/cutover/action",
+ "Microsoft.DataMigration/sqlMigrationServices/write",
+ "Microsoft.DataMigration/sqlMigrationServices/delete",
+ "Microsoft.DataMigration/sqlMigrationServices/read",
+ "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action",
+ "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action",
+ "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action",
+ "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action",
+ "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read",
+ "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
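+
+For example, here's a minimal Azure CLI sketch, assuming you've saved a role definition to a local file named `DmsCustomRoleDemoForMI.json` (the file name is illustrative). Note that the Azure CLI expects a flat role-definition format with top-level `Name`, `Description`, `Actions`, `NotActions`, and `AssignableScopes` keys rather than the `properties` wrapper shown above, so adjust the file accordingly.
+
+```azurecli
+# Create the custom role from a local JSON definition file
+az role definition create --role-definition @DmsCustomRoleDemoForMI.json
+```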
+
+For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
+
+## Description of permissions needed to migrate to Azure SQL Managed Instance
+
+| Permission Action | Description |
+| - | --|
+| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
+| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
+| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. |
+| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. |
+| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. |
+| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. |
+| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. |
+| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. |
+| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. |
+| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. |
+| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. |
+| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
+| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. |
+| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
+| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. |
+| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service |
+| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. |
+| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. |
+| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. |
+| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. |
+| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. |
+| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. |
+| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
+| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. |
+| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. |
+| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine. |
+
+## Role assignment
+
+To assign a role to a user or an app ID, open the Azure portal and perform the following steps:
+
+1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
+
+2. Select the appropriate role, select the User or APP ID, and then save the changes.
+
+ The user or APP ID(s) now appears listed on the **Role assignments** tab.
+
+## Next steps
+
+* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
+
+ Title: "Custom roles: Online SQL Server to Azure Virtual Machines migrations with ADS"
+
+description: Learn to use the custom roles for SQL Server to Azure Virtual Machines migrations.
+++++++++ Last updated : 05/02/2022++
+# Custom roles for SQL Server to Azure Virtual Machines migrations using ADS
+
+This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with an Azure Virtual Machine as a target.
+
+The `AssignableScopes` section of the role definition JSON string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. Note that defining the assignable scopes doesn't perform the actual role assignment.
+
+```json
+{
+ "properties": {
+ "roleName": "DmsCustomRoleDemoForVM",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/<storageSubscription>/resourceGroups/<storageAccountRG>",
+ "/subscriptions/<ManagedInstanceSubscription>/resourceGroups/<virtualMachineRG>",
+ "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/read",
+ "Microsoft.Storage/storageAccounts/listkeys/action",
+ "Microsoft.Storage/storageAccounts/blobServices/read",
+ "Microsoft.Storage/storageAccounts/blobServices/write",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+ "Microsoft.SqlVirtualMachine/sqlVirtualMachines/read",
+ "Microsoft.SqlVirtualMachine/sqlVirtualMachines/write",
+ "Microsoft.DataMigration/locations/operationResults/read",
+ "Microsoft.DataMigration/locations/operationStatuses/read",
+ "Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read",
+ "Microsoft.DataMigration/databaseMigrations/write",
+ "Microsoft.DataMigration/databaseMigrations/read",
+ "Microsoft.DataMigration/databaseMigrations/delete",
+ "Microsoft.DataMigration/databaseMigrations/cancel/action",
+ "Microsoft.DataMigration/databaseMigrations/cutover/action",
+ "Microsoft.DataMigration/sqlMigrationServices/write",
+ "Microsoft.DataMigration/sqlMigrationServices/delete",
+ "Microsoft.DataMigration/sqlMigrationServices/read",
+ "Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action",
+ "Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action",
+ "Microsoft.DataMigration/sqlMigrationServices/deleteNode/action",
+ "Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action",
+ "Microsoft.DataMigration/sqlMigrationServices/listMigrations/read",
+ "Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+You can use the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
+
+For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
+
+## Description of permissions needed to migrate to a virtual machine
+
+| Permission Action | Description |
+| - | --|
+| Microsoft.Storage/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
+| Microsoft.Storage/storageAccounts/listkeys/action | Returns the access keys for the specified storage account. |
+| Microsoft.Storage/storageAccounts/blobServices/read | List blob services. |
+| Microsoft.Storage/storageAccounts/blobServices/write | Returns the result of put blob service properties. |
+| Microsoft.Storage/storageAccounts/blobServices/containers/read | Returns list of containers. |
+| Microsoft.Sql/managedInstances/read | Return the list of managed instances or gets the properties for the specified managed instance. |
+| Microsoft.Sql/managedInstances/write | Creates a managed instance with the specified parameters or update the properties or tags for the specified managed instance. |
+| Microsoft.Sql/managedInstances/databases/read | Gets existing managed database. |
+| Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. |
+| Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database. |
+| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. |
+| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
+| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. |
+| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. |
+| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
+| Microsoft.DataMigration/databaseMigrations/cutover/action | Cutover online migration operation for the database. |
+| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service |
+| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. |
+| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. |
+| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. |
+| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. |
+| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. |
+| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. |
+| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
+| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. |
+| Microsoft.SqlVirtualMachine/sqlVirtualMachines/read | Retrieve details of SQL virtual machine. |
+| Microsoft.SqlVirtualMachine/sqlVirtualMachines/write | Create a new or change properties of existing SQL virtual machine. |
+
+## Role assignment
+
+To assign a role to a user or an app ID, open the Azure portal and perform the following steps:
+
+1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
+
+2. Select the appropriate role, select the User or APP ID, and then save the changes.
+
+ The user or APP ID(s) now appears listed on the **Role assignments** tab.
+
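+The assignment can also be scripted. A minimal Azure CLI sketch follows; the assignee object ID, role name, and scope are placeholders, and the scope should match one of the assignable scopes in the role definition.
+
+```azurecli
+# Assign the custom role to a user or app at the DMS resource group scope
+az role assignment create \
+    --assignee "<user-or-app-object-id>" \
+    --role "DmsCustomRoleDemoForVM" \
+    --scope "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
+```
+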
+## Next steps
+
+* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
+ - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-managed-instance-ads.md).
> [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard. * Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
+ - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-managed-instance-ads.md).
> [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard. * Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription.
+ - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-virtual-machine-ads.md).
> [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard. * Create a target [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
- Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share). - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account. - Owner or Contributor role for the Azure subscription.
+ - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-db-virtual-machine-ads.md).
> [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard. * Create a target [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
event-hubs Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-audit-minimum-version.md
+
+ Title: Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace
+
+description: Configure Azure Policy to audit compliance of Azure Event Hubs for using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Use Azure Policy to audit for compliance of minimum TLS version for an Azure Event Hubs namespace (Preview)
+
+If you have a large number of Microsoft Azure Event Hubs namespaces, you may want to perform an audit to make sure that all namespaces are configured for the minimum version of TLS that your organization requires. To audit a set of Event Hubs namespaces for their compliance, use Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply rules to Azure resources. Azure Policy helps you to keep those resources compliant with your corporate standards and service level agreements. For more information, see [Overview of Azure Policy](../governance/policy/overview.md).
+
+## Create a policy with an audit effect
+
+Azure Policy supports effects that determine what happens when a policy rule is evaluated against a resource. The audit effect creates a warning when a resource is not in compliance, but does not stop the request. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with an audit effect for the minimum TLS version with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Definitions**.
+3. Select **Add policy definition** to create a new policy definition.
+4. For the **Definition location** field, select the **More** button to specify where the audit policy resource is located.
+5. Specify a name for the policy. You can optionally specify a description and category.
+6. Under **Policy rule**, add the following policy definition to the **policyRule** section.
+
+ ```json
+ {
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.EventHub/namespaces"
+ },
+ {
+ "not": {
+ "field": " Microsoft.EventHub/namespaces/minimumTlsVersion",
+ "equals": "1.2"
+ }
+ }
+ ]
+ },
+ "then": {
+ "effect": "audit"
+ }
+ }
+ }
+ ```
+
+7. Save the policy.
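+
+If you'd rather create the definition from the command line, the following Azure CLI sketch assumes the `if`/`then` rule from step 6 (without the outer `policyRule` wrapper) is saved locally as `audit-eventhubs-min-tls-rule.json`; the policy name and display name are illustrative.
+
+```azurecli
+# Create the audit policy definition from the saved rule file
+az policy definition create \
+    --name "audit-eventhubs-minimum-tls" \
+    --display-name "Audit minimum TLS version for Event Hubs namespaces" \
+    --rules audit-eventhubs-min-tls-rule.json \
+    --mode All
+```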
+
+### Assign the policy
+
+Next, assign the policy to a resource. The scope of the policy corresponds to that resource and any resources beneath it. For more information on policy assignment, see [Azure Policy assignment structure](../governance/policy/concepts/assignment-structure.md).
+
+To assign the policy with the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Under the **Authoring** section, select **Assignments**.
+3. Select **Assign policy** to create a new policy assignment.
+4. For the **Scope** field, select the scope of the policy assignment.
+5. For the **Policy definition** field, select the **More** button, then select the policy you defined in the previous section from the list.
+6. Provide a name for the policy assignment. The description is optional.
+7. Leave **Policy enforcement** set to _Enabled_. This setting has no effect on the audit policy.
+8. Select **Review + create** to create the assignment.
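+
+The equivalent assignment can be scripted with the Azure CLI. In this sketch, the assignment name is illustrative and the scope is a placeholder resource group:
+
+```azurecli
+# Assign the audit policy at the resource group scope
+az policy assignment create \
+    --name "audit-eh-min-tls" \
+    --policy "audit-eventhubs-minimum-tls" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
+```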
+
+### View compliance report
+
+After you have assigned the policy, you can view the compliance report. The compliance report for an audit policy provides information on which Event Hubs namespaces are not in compliance with the policy. For more information, see [Get policy compliance data](../governance/policy/how-to/get-compliance-data.md).
+
+It may take several minutes for the compliance report to become available after the policy assignment is created.
+
+To view the compliance report in the Azure portal, follow these steps:
+
+1. In the Azure portal, navigate to the Azure Policy service.
+2. Select **Compliance**.
+3. Filter the results for the name of the policy assignment that you created in the previous step. The report shows how many resources are not in compliance with the policy.
+4. You can drill down into the report for additional details, including a list of Event Hubs namespaces that are not in compliance.
+
+## Use Azure Policy to enforce the minimum TLS version
+
+Azure Policy supports cloud governance by ensuring that Azure resources adhere to requirements and standards. To enforce a minimum TLS version requirement for the Event Hubs namespaces in your organization, you can create a policy that prevents the creation of a new Event Hubs namespace whose minimum TLS requirement is set to an older version of TLS than the one dictated by the policy. This policy will also prevent all configuration changes to an existing namespace if the minimum TLS version setting for that namespace is not compliant with the policy.
+
+The enforcement policy uses the deny effect to prevent a request that would create or modify an Event Hubs namespace so that the minimum TLS version no longer adheres to your organization's standards. For more information about effects, see [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+
+To create a policy with a deny effect for a minimum TLS version that is less than TLS 1.2, provide the following JSON in the **policyRule** section of the policy definition:
+
+```json
+{
+ "policyRule": {
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": " Microsoft.EventHub/namespaces"
+ },
+ {
+ "not": {
+ "field": " Microsoft.EventHub/namespaces/minimumTlsVersion",
+ "equals": "1.2"
+ }
+ }
+ ]
+ },
+ "then": {
+ "effect": "deny"
+ }
+ }
+}
+```
+
+After you create the policy with the deny effect and assign it to a scope, a user cannot create an Event Hubs namespace with a minimum TLS version that is older than 1.2. Nor can a user make any configuration changes to an existing Event Hubs namespace that currently requires a minimum TLS version that is older than 1.2. Attempting to do so results in an error. The required minimum TLS version for the Event Hubs namespace must be set to 1.2 to proceed with namespace creation or configuration.
+
+An error will be shown if you try to create an Event Hubs namespace with the minimum TLS version set to TLS 1.0 when a policy with a deny effect requires that the minimum TLS version be set to TLS 1.2.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
event-hubs Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-client-version.md
+
+ Title: Configure Transport Layer Security (TLS) for an Event Hubs client application
+
+description: Configure a client application to communicate with Azure Event Hubs using a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Configure Transport Layer Security (TLS) for an Event Hubs client application (Preview)
+
+For security purposes, an Azure Event Hubs namespace may require that clients use a minimum version of Transport Layer Security (TLS) to send requests. Calls to Azure Event Hubs will fail if the client is using a version of TLS that is lower than the minimum required version. For example, if a namespace requires TLS 1.2, then a request sent by a client who is using TLS 1.1 will fail.
+
+This article describes how to configure a client application to use a particular version of TLS. For information about how to configure a minimum required version of TLS for an Azure Event Hubs namespace, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-configure-minimum-version.md).
+
+## Configure the client TLS version
+
+In order for a client to send a request with a particular version of TLS, the operating system must support that version.
+
+The following example shows how to set the client's TLS version to 1.2 from .NET. The .NET Framework used by the client must support TLS 1.2. For more information, see [Support for TLS 1.2](/dotnet/framework/network-programming/tls#support-for-tls-12).
+
+# [.NET](#tab/dotnet)
+
+The following sample shows how to enable TLS 1.2 in a .NET client using the Azure.Messaging.EventHubs client library:
+
+```csharp
+{
+ // Enable TLS 1.2 before connecting to Event Hubs
+ System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls12;
+
+ // Connection string to your Event Hubs namespace
+ string connectionString = "<NAMESPACE CONNECTION STRING>";
+
+ // Name of your Event Hub
+ string eventHubName = "<EVENT HUB NAME>";
+
+    // The producer client used to publish events to the event hub
+ var producer = new EventHubProducerClient(connectionString, eventHubName);
+
+    // Use the producer client to send a batch of events to the event hub
+ using EventDataBatch eventBatch = await producer.CreateBatchAsync();
+ var eventData = new EventData("This is an event body");
+
+ if (!eventBatch.TryAdd(eventData))
+ {
+ throw new Exception($"The event could not be added.");
+    }
+
+    // Publish the batch of events to the event hub
+    await producer.SendAsync(eventBatch);
+}
+```
+++
+## Verify the TLS version used by a client
+
+To verify that the specified version of TLS was used by the client to send a request, you can use [Fiddler](https://www.telerik.com/fiddler) or a similar tool. Open Fiddler to start capturing client network traffic, then execute one of the examples in the previous section. Look at the Fiddler trace to confirm that the correct version of TLS was used to send the request.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
+
+ Title: Configure the minimum TLS version for an Event Hubs namespace using ARM
+
+description: Configure an Azure Event Hubs namespace to use a minimum version of Transport Layer Security (TLS).
+++++ Last updated : 04/25/2022+++
+# Configure the minimum TLS version for an Event Hubs namespace using ARM (Preview)
+
+To configure the minimum TLS version for an Event Hubs namespace, set the `MinimumTlsVersion` property. When you create an Event Hubs namespace with an Azure Resource Manager template, the `MinimumTlsVersion` property is set to 1.2 by default, unless explicitly set to another version.
+
+> [!NOTE]
+> Namespaces created using an api-version prior to 2022-01-01-preview will have 1.0 as the value for `MinimumTlsVersion`. This behavior was the prior default, and is still there for backwards compatibility.
+
+## Create a template to configure the minimum TLS version
+
+To configure the minimum TLS version for an Event Hubs namespace with a template, create a template with the `MinimumTlsVersion` property set to 1.0, 1.1, or 1.2. The following steps describe how to create a template in the Azure portal.
+
+1. In the Azure portal, choose **Create a resource**.
+2. In **Search the Marketplace**, type **custom deployment**, and then press **ENTER**.
+3. Choose **Custom deployment (deploy using custom templates) (preview)**, choose **Create**, and then choose **Build your own template in the editor**.
+4. In the template editor, paste in the following JSON to create a new namespace and set the minimum TLS version to TLS 1.2. Remember to replace the placeholders in angle brackets with your own values.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {
+ "eventHubNamespaceName": "[concat(uniqueString(subscription().subscriptionId), 'tls')]"
+ },
+ "resources": [
+ {
+ "name": "[variables('eventHubNamespaceName')]",
+ "type": "Microsoft.EventHub/namespaces",
+ "apiVersion": "2022-01-01-preview",
+ "location": "westeurope",
+ "properties": {
+ "minimumTlsVersion": "1.2"
+ },
+ "dependsOn": [],
+ "tags": {}
+ }
+ ]
+ }
+ ```
+
+5. Save the template.
+6. Specify the resource group parameter, and then choose the **Review + create** button to deploy the template and create a namespace with the `MinimumTlsVersion` property configured.
+
+> [!NOTE]
+> After you update the minimum TLS version for the Event Hubs namespace, it may take up to 30 seconds before the change is fully propagated.
+
+Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Event Hubs resource provider.
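+
+The template can also be deployed from the command line. A sketch with the Azure CLI, assuming the template above is saved locally as `eventhubs-min-tls.json` and the target resource group already exists:
+
+```azurecli
+# Deploy the template that creates a namespace with minimumTlsVersion set to 1.2
+az deployment group create \
+    --resource-group <resource-group-name> \
+    --template-file eventhubs-min-tls.json
+```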
+
+## Test the minimum TLS version from a client
+
+To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
+
+When a client accesses an Event Hubs namespace using a TLS version that does not meet the minimum TLS version configured for the namespace, Azure Event Hubs returns error code 401 (Unauthorized) and a message indicating that the TLS version that was used is not permitted for making requests against this Event Hubs namespace.
+
+> [!NOTE]
+> Due to limitations in the Confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
+
+> [!NOTE]
+> When you configure a minimum TLS version for an Event Hubs namespace, that minimum version is enforced at the application layer. Tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the Event Hubs namespace endpoint.
+
+## Next steps
+
+See the following documentation for more information.
+
+- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace](transport-layer-security-enforce-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
event-hubs Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-enforce-minimum-version.md
+
+ Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace
+
+description: Configure an Azure Event Hubs namespace to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Event Hubs.
+++++ Last updated : 04/25/2022+++
+# Enforce a minimum required version of Transport Layer Security (TLS) for requests to an Event Hubs namespace (Preview)
+
+Communication between a client application and an Azure Event Hubs namespace is encrypted using Transport Layer Security (TLS). TLS is a standard cryptographic protocol that ensures privacy and data integrity between clients and services over the Internet. For more information about TLS, see [Transport Layer Security](https://datatracker.ietf.org/wg/tls/about/).
+
+Azure Event Hubs supports choosing a specific TLS version for namespaces. Currently Azure Event Hubs uses TLS 1.2 on public endpoints by default, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+
+Azure Event Hubs namespaces permit clients to send and receive data with TLS 1.0 and above. To enforce stricter security measures, you can configure your Event Hubs namespace to require that clients send and receive data with a newer version of TLS. If an Event Hubs namespace requires a minimum version of TLS, then any requests made with an older version will fail.
+
+> [!IMPORTANT]
+> If you are using a service that connects to Azure Event Hubs, make sure that the service is using the appropriate version of TLS to send requests to Azure Event Hubs before you set the required minimum version for an Event Hubs namespace.
+
+## Permissions necessary to require a minimum version of TLS
+
+To set the `MinimumTlsVersion` property for the Event Hubs namespace, a user must have permissions to create and manage Event Hubs namespaces. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.EventHub/namespaces/write** or **Microsoft.EventHub/namespaces/\*** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../role-based-access-control/built-in-roles.md#contributor) role
+- The [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) role
+
+Role assignments must be scoped to the level of the Event Hubs namespace or higher to permit a user to require a minimum version of TLS for the Event Hubs namespace. For more information about role scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md).
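+
+For example, the following Azure CLI sketch assigns the built-in Contributor role at the scope of a single Event Hubs namespace, which is enough to set `MinimumTlsVersion` on that namespace only; the identifiers are placeholders.
+
+```azurecli
+# Scope the role assignment to one Event Hubs namespace rather than the whole subscription
+az role assignment create \
+    --assignee "<user-or-app-object-id>" \
+    --role "Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.EventHub/namespaces/<namespace-name>"
+```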
+
+Be careful to restrict assignment of these roles only to those who require the ability to create an Event Hubs namespace or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../role-based-access-control/best-practices.md).
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [**Owner**](../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage Event Hubs namespaces. For more information, see [**Classic subscription administrator roles, Azure roles, and Azure AD administrator roles**](../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+## Network considerations
+
+When a client sends a request to an Event Hubs namespace, the client establishes a connection with the Event Hubs namespace endpoint first, before processing any requests. The minimum TLS version setting is checked after the TLS connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection will continue to succeed, but the request will eventually fail.
+
+> [!NOTE]
+> Due to limitations in the Confluent library, errors coming from an invalid TLS version will not surface when connecting through the Kafka protocol. Instead, a general exception will be shown.
+
+Here are a few important points to consider:
+
+- A network trace would show the successful establishment of a TCP connection and successful TLS negotiation, before a 401 is returned if the TLS version used is less than the minimum TLS version configured.
+- Penetration or endpoint scanning on `yournamespace.servicebus.windows.net` will indicate the support for TLS 1.0, TLS 1.1, and TLS 1.2, as the service continues to support all these protocols. The minimum TLS version, enforced at the namespace level, indicates the lowest TLS version that the namespace will support.
+## Next steps
+
+See the following documentation for more information.
+
+- [Configure the minimum TLS version for an Event Hubs namespace](transport-layer-security-configure-minimum-version.md)
+- [Configure Transport Layer Security (TLS) for an Event Hubs client application](transport-layer-security-configure-client-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for an Event Hubs namespace](transport-layer-security-audit-minimum-version.md)
expressroute Bgp Communities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/bgp-communities.md
+
+ Title: Managing complex network architectures with BGP communities - Azure ExpressRoute
+description: Learn how to manage complex networks with BGP community values.
++++ Last updated : 04/20/2022+++
+# Managing complex network architectures with BGP communities
+
+Managing a hybrid network can get increasingly complex as you deploy more ExpressRoute circuits and establish more connections to your workloads in different Azure regions. To help manage the complexity of your network and route traffic from Azure to on-premises efficiently, you can configure BGP communities on your Azure virtual networks.
+
+## What is a BGP community?
+
+A Border Gateway Protocol (BGP) community is a group of IP prefixes that share a common property called a BGP community tag or value. In Azure, you can now:
+
+* Set a custom BGP community value on each of your virtual networks.
+
+* Access a predefined regional BGP community value for all your virtual networks deployed in a region.
+
+Once these values are configured on your virtual networks, ExpressRoute will preserve them on the corresponding private IP prefixes shared with your on-premises network. When these prefixes are learned on-premises, they're learned along with the configured BGP community values.
+
+## Using community values for multi-region networks
+
+A common scenario for when to use ExpressRoute is when you want to access workloads deployed in an Azure virtual network. ExpressRoute facilitates the exchange of Azure and on-premises private IP address ranges using a BGP session over a private connection. This feature enables a seamless extension of your existing networks into the cloud.
+
+When you have multiple ExpressRoute connections to virtual networks in different Azure regions, traffic can take more than one path. A hybrid network architecture diagram below demonstrates the emergence of a suboptimal route when establishing a mesh network with multiple regions and ExpressRoute circuits:
++
+To ensure traffic going to **Region A** takes the optimal path over **ER Circuit 1**, the customer could configure a route filter on-premises so that **Region A** routes are learned at the customer edge only from **ER Circuit 1** and not learned at all from **ER Circuit 2**. This approach requires you to maintain a comprehensive list of IP prefixes in each region and regularly update this list whenever a new virtual network is added or a private IP address space is expanded in the cloud. As you continue to grow your presence in the cloud, this burden can become excessive.
+
+When virtual network IP prefixes are learned on-premises with custom and regional BGP community values, you can configure your route filters based on these values instead of specific IP prefixes. When you expand your address spaces or create more virtual networks in an existing region, you don't need to modify your route filter, because it already has rules for the corresponding community values. Using BGP communities simplifies your multi-region hybrid networking.
+
+## Other uses of BGP communities
+
+Another reason to configure a BGP community value on a virtual network connected to ExpressRoute is to understand where traffic originates within an Azure region. As you deploy more virtual networks and adopt more complex network topologies within an Azure region, troubleshooting connectivity and performance issues can become more difficult. With custom BGP community values configured on each virtual network within a region, you can quickly identify which virtual network the traffic originated from in Azure. Being able to identify the source virtual network helps you narrow down your investigation.
+
+## Next steps
+
+Learn how to [configure BGP communities](how-to-configure-custom-bgp-communities-portal.md) using the Azure portal.
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Modernize your internet first applications on Azure with Cloud Native experience
* Unified static and dynamic delivery offered in a single tier to accelerate and scale your application through caching, SSL offload, and layer 3-4 DDoS protection.
-* Free, autorotation managed SSL certificates that save time and quickly secure apps and content.
+* Free, [autorotation managed SSL certificates](end-to-end-tls.md) that save time and quickly secure apps and content.
* Low entry fee and a simplified cost model that reduces billing complexity by having fewer meters needed to plan for.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Previously updated : 03/21/2022 Last updated : 05/05/2022
All the operations that are supported that extend the REST API.
| [$validate](validation-against-profiles.md) | Yes | Yes | | | [$member-match](tutorial-member-match.md) | Yes | Yes | | | [$patient-everything](patient-everything.md) | Yes | Yes | |
-| $purge-history | Yes | Yes | |
+| [$purge-history](purge-history.md) | Yes | Yes | |
## Persistence
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
+
+ Title: Purge history operation for Azure API for FHIR
+description: This article describes the $purge-history operation for Azure API for FHIR.
++++ Last updated : 05/05/2022+++
+# Purge history operation for Azure API for FHIR
+
+`$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
+
+## Overview of purge history
+
+The `$purge-history` operation was created to help with the management of resource history in Azure API for FHIR. It's uncommon to need to purge resource history. However, it's needed in cases when the system level or resource level versioning policy changes, and you want to clean up existing resource history.
+
+Since `$purge-history` is a resource-level operation, rather than a type-level or system-level operation, you'll need to run the operation for every resource that you want to remove the history from.
+
+## Examples of purge history
+
+To use `$purge-history`, you must add `/$purge-history` to the end of a standard delete request. The template of the request is:
+
+```http
+DELETE <FHIR-Service-Url>/<Resource-Type>/<Resource-Id>/$purge-history
+```
+
+For example:
+
+```http
+DELETE https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history
+```
+
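+Because `$purge-history` runs on one resource at a time, you might script it across a set of resources. The following loop is a minimal sketch; the resource IDs are placeholders, and it assumes you've already acquired an access token in the `TOKEN` variable.
+
+```bash
+# Purge history for several Observation resources (IDs and URL are placeholders)
+for id in 123 456 789; do
+  curl -X DELETE \
+    -H "Authorization: Bearer $TOKEN" \
+    "https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/$id/\$purge-history"
+done
+```
+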
+## Next steps
+
+In this article, you learned how to purge the history for resources in Azure API for FHIR. For more information about Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[Supported FHIR features](fhir-features-supported.md)
+
+>[!div class="nextstepaction"]
+>[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
This article provides guides and resources to troubleshoot Events.
> > - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully. >
-> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+> For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
### Events message structure
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
The final step is to set the import configuration of the FHIR service, which con
> [!NOTE] > If you haven't assigned storage access permissions to the FHIR service, the import operations ($import) will fail.
-To specify the Azure Storage account, you need to use [Rest API](/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.
+To specify the Azure Storage account, you need to use [REST API](/rest/api/healthcareapis/services/create-or-update) to update the FHIR service.
To get the request URL and body, browse to the Azure portal of your FHIR service. Select **Overview**, and then **JSON View**.
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Previously updated : 03/01/2022 Last updated : 05/05/2022
All the operations that are supported that extend the REST API.
| [$validate](validation-against-profiles.md) | Yes | Yes | | | [$member-match](tutorial-member-match.md) | Yes | Yes | | | [$patient-everything](patient-everything.md) | Yes | Yes | |
-| $purge-history | Yes | Yes | |
+| [$purge-history](purge-history.md) | Yes | Yes | |
## Role-based access control
healthcare-apis Fhir Versioning Policy And History Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-versioning-policy-and-history-management.md
+
+ Title: Versioning policy and history management for Azure Health Data Services FHIR service
+description: This article describes the concepts of versioning policy and history management for Azure Health Data Services FHIR service.
++++ Last updated : 05/05/2022+++
+# Versioning policy and history management
+
+The versioning policy in the Azure Health Data Services FHIR service is a setting that determines how history is stored for every resource type, with the option of resource-specific overrides. This policy is directly related to the concept of managing history for FHIR resources.
+
+## History in FHIR
+
+History in FHIR gives you the ability to see all previous versions of a resource. History in FHIR can be queried at the resource level, type level, or system level. The HL7 FHIR documentation has more information about the [history interaction](https://www.hl7.org/fhir/http.html#history). History is useful in scenarios where you want to see the evolution of a resource in FHIR or if you want to see the information of a resource at a specific point in time.
+
+All past versions of a resource are considered obsolete, and the current version of a resource should be used for normal business workflow operations. However, it can be useful to see the state of a resource at the point in time when a past decision was made.
+
+## Versioning policy
+
+Versioning policy in the FHIR service lets you decide how history is stored either at a FHIR service level or at a specific resource level.
+
+There are three different levels for versioning policy:
+
+- `versioned`: History is stored for operation on resources. Resource version is incremented. This is the default.
+- `version-update`: History is stored for operation on resources. Resource version is incremented. Updates require a valid `If-Match` header. For more information, see [VersionedUpdateExample.http](https://github.com/microsoft/fhir-server/blob/main/docs/rest/VersionedUpdateExample.http).
+- `no-version`: History isn't created for resources. Resource version is incremented.
+
+Versioning policy is available to configure as a system-wide setting and can also be overridden at a resource level. The system-wide setting is used for all resources in your FHIR service, unless a resource-specific versioning policy has been added.
+
+### Versioning policy comparison
+
+| Policy Value | History Behavior | `meta.versionId` Update Behavior | Default |
+| - | | -- | - |
+| `versioned` | History is stored | If-Match not required | Yes |
+| `version-update` | History is stored | If-Match required | No |
+| `no-version` | History isn't stored | If-Match not required | No |
+
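+For example, with the `version-update` policy, an update request is accepted only when it includes an `If-Match` header that carries the resource's current version as a weak ETag. The following request is a minimal sketch; the resource type, ID, and version are placeholders.
+
+```http
+PUT <FHIR-Service-Url>/Patient/123
+If-Match: W/"2"
+Content-Type: application/fhir+json
+```
+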
+> [!NOTE]
+> Changing the versioning policy to `no-version` has no effect on existing resource history. If history needs to be removed for resources, use the [$purge-history](purge-history.md) operation.
+
+## Configuring versioning policy
+
+To configure versioning policy, select the **Versioning Policy Configuration** blade inside your FHIR service.
+
+[ ![Screenshot of the Azure portal Versioning Policy Configuration.](media/versioning-policy/fhir-service-versioning-policy-configuration.png) ](media/versioning-policy/fhir-service-versioning-policy-configuration.png#lightbox)
+
+After you've browsed to Versioning Policy Configuration, you'll be able to configure the setting at both the system level and the resource level (as an override of the system level). The system-level configuration (annotated as 1) applies to every resource in your FHIR service unless a resource-specific override (annotated as 2) has been configured.
+
+[ ![Screenshot of Azure portal versioning policy configuration showing system level vs resource level configuration.](media/versioning-policy/system-level-versus-resource-level.png) ](media/versioning-policy/system-level-versus-resource-level.png#lightbox)
+
+When configuring a resource-level override, you'll be able to select the FHIR resource type (annotated as 1) and the versioning policy for that specific resource (annotated as 2). Make sure to select the **Add** button (annotated as 3) to queue up this setting for saving.
+
+[ ![Screenshot of Azure portal versioning policy configuration showing resource level configuration.](media/versioning-policy/resource-versioning.jpg) ](media/versioning-policy/resource-versioning.jpg#lightbox)
++
+**Make sure** to select **Save** after you've completed your versioning policy configuration.
+
+[ ![Screenshot of Azure portal versioning policy configuration showing save button.](media/versioning-policy/save-button.jpg) ](media/versioning-policy/save-button.jpg#lightbox)
+
+## History Management
+
+History in FHIR is important for end users to see how a resource has changed over time. It's also useful in coordination with audit logs to see the state of a resource before and after a user modified it. In general, it's recommended to keep history for a resource unless you know that the history isn't needed. However, frequent updates to resources can produce a large amount of history data, which might be undesirable in FHIR services that already store a large amount of data.
+
+Changing the versioning policy either at a system level or resource level won't remove the existing history for any resources in your FHIR service. If you're looking to reduce the history data size in your FHIR service, you must use the [$purge-history](purge-history.md) operation.
+
+## Next steps
+
+In this article, you learned about versioning policy and history management for the FHIR service. For more information about removing existing resource history, see
+
+>[!div class="nextstepaction"]
+>[Purge history operation](purge-history.md)
++
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/purge-history.md
+
+ Title: Purge history operation for Azure Health Data Services FHIR service
+description: This article describes the $purge-history operation for the FHIR service.
++++ Last updated : 05/05/2022+++
+# Purge history operation
+
+`$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification, but it's useful for [history management](fhir-versioning-policy-and-history-management.md) in large FHIR service instances.
+
+## Overview of purge history
+
+The `$purge-history` operation was created to help with the management of resource history in FHIR service. It's uncommon to need to purge resource history. However, it's needed in cases when the system level or resource level [versioning policy](fhir-versioning-policy-and-history-management.md) changes, and you want to clean up existing resource history.
+
+Since `$purge-history` is a resource-level operation, rather than a type-level or system-level operation, you'll need to run the operation for every resource that you want to remove the history from.
+
+## Examples of purge history
+
+To use `$purge-history`, you must add `/$purge-history` to the end of a standard delete request. The template of the request is:
+
+```http
+DELETE <FHIR-Service-Url>/<Resource-Type>/<Resource-Id>/$purge-history
+```
+
+For example:
+
+```http
+DELETE https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history
+```
+
+## Next steps
+
+In this article, you learned how to purge the history for resources in the FHIR service. For more information about how to disable history and some concepts about history management, see
+
+>[!div class="nextstepaction"]
+>[Versioning policy and history management](fhir-versioning-policy-and-history-management.md)
+
+>[!div class="nextstepaction"]
+>[Supported FHIR features](fhir-features-supported.md)
+
+>[!div class="nextstepaction"]
+>[FHIR REST API capabilities for Azure Health Data Services FHIR service](fhir-rest-api-capabilities.md)
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: How to configure an IoT Edge device to connect to Azure IoT Edge ga
Previously updated : 02/28/2022 Last updated : 05/03/2022
monikerRange: ">=iotedge-2020-11"
This article provides instructions for establishing a trusted connection between an IoT Edge gateway and a downstream IoT Edge device.
-In a gateway scenario, an IoT Edge device can be both a gateway and a downstream device. Multiple IoT Edge gateways can be layered to create a hierarchy of devices. The downstream (or child) devices can authenticate and send or receive messages through their gateway (or parent) device.
+In a gateway scenario, an IoT Edge device can be both a gateway and a downstream device. Multiple IoT Edge gateways can be layered to create a hierarchy of devices. The downstream (child) devices can authenticate and send or receive messages through their gateway (parent) device.
-There are two different configurations for IoT Edge devices in a gateway hierarchy, and this article address both. The first is the **top layer** IoT Edge device. When multiple IoT Edge devices are connecting through each other, any device that does not have a parent device but connects directly to IoT Hub is considered to be in the top layer. This device is responsible for handling requests from all the devices below it. The other configuration applies to any IoT Edge device in a **lower layer** of the hierarchy. These devices may be a gateway for other downstream IoT and IoT Edge devices, but also need to route any communications through their own parent devices.
+There are two different configurations for IoT Edge devices in a gateway hierarchy, and this article addresses both. The first is the **top layer** IoT Edge device. When multiple IoT Edge devices are connecting through each other, any device that doesn't have a parent device but connects directly to IoT Hub is considered to be in the top layer. This device is responsible for handling requests from all the devices below it. The other configuration applies to any IoT Edge device in a **lower layer** of the hierarchy. These devices may be a gateway for other downstream IoT and IoT Edge devices, but also need to route any communications through their own parent devices.
Some network architectures require that only the top IoT Edge device in a hierarchy can connect to the cloud. In this configuration, all IoT Edge devices in lower layers of a hierarchy can only communicate with their gateway (or parent) device and any downstream (or child) devices.
-All the steps in this article build on those in [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md), which sets up an IoT Edge device to be a gateway for downstream IoT devices. The same basic steps apply to all gateway scenarios:
+All the steps in this article build on [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md), which sets up an IoT Edge device to be a gateway for downstream IoT devices. The same basic steps apply to all gateway scenarios:
* **Authentication**: Create IoT Hub identities for all devices in the gateway hierarchy. * **Authorization**: Set up the parent/child relationship in IoT Hub to authorize child devices to connect to their parent device like they would connect to IoT Hub.
Additional device-identity commands, including `add-children`,`list-children`, a
> >Here is an [example of assigning child devices](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/e2e/test/iothub/service/RegistryManagerE2ETests.cs) using the C# SDK. The task `RegistryManager_AddAndRemoveDeviceWithScope()` shows how to programmatically create a three-layer hierarchy. An IoT Edge device is in layer one, as the parent. Another IoT Edge device is in layer two, serving as both a child and a parent. Finally, an IoT device is in layer three, as the lowest layer child device.
-## Prepare certificates
+## Generate certificates
A consistent chain of certificates must be installed across devices in the same gateway hierarchy to establish secure communication between them. Every device in the hierarchy, whether an IoT Edge device or an IoT leaf device, needs a copy of the same root CA certificate. Each IoT Edge device in the hierarchy then uses that root CA certificate as the root for its device CA certificate.
-With this setup, each downstream IoT Edge device or IoT leaf device can verify the identity of their parent by verifying that the edgeHub they connect to has a server certificate that is signed by the shared root CA certificate.
+With this setup, each downstream IoT Edge device can verify the identity of its parent by verifying that the *edgeHub* it connects to has a server certificate that is signed by the shared root CA certificate.
-<!-- TODO: certificate graphic -->
-Create the following certificates:
+For more information about IoT Edge certificate requirements, see
+[Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
-* A **root CA certificate**, which is the topmost shared certificate for all the devices in a given gateway hierarchy. This certificate is installed on all devices.
-* Any **intermediate certificates** that you want to include in the root certificate chain.
-* A **device CA certificate** and its **private key**, generated by the root and intermediate certificates. You need one unique device CA certificate for each IoT Edge device in the gateway hierarchy.
+01. Create or request the following certificates:
-You can use either a self-signed certificate authority or purchase one from a trusted commercial certificate authority like Baltimore, Verisign, Digicert, or GlobalSign.
+ * A **root CA certificate**, which is the topmost shared certificate for all the devices in a given gateway hierarchy. This certificate is installed on all devices.
+ * Any **intermediate certificates** that you want to include in the root certificate chain.
+ * A **device CA certificate** and its **private key**, generated by the root and intermediate certificates. You need one unique device CA certificate for each IoT Edge device in the gateway hierarchy.
-If you don't have your own certificates to use, you can [create demo certificates to test IoT Edge device features](how-to-create-test-certificates.md). Follow the steps in that article to create one set of root and intermediate certificates, then to create IoT Edge device CA certificates for each of your devices.
+ You can use either a self-signed certificate authority or purchase one from a trusted commercial certificate authority like Baltimore, Verisign, Digicert, or GlobalSign.
-## Configure IoT Edge on devices
+01. If you don't have your own certificates to use for testing, create one set of root and intermediate certificates, then create IoT Edge device CA certificates for each device. In this article, we'll use test certificates generated using [test CA certificates for samples and tutorials](https://github.com/Azure/iotedge/tree/main/tools/CACertificates).
+For example, the following commands create a root CA certificate, a parent device certificate, and a child device certificate.
-The steps for setting up IoT Edge as a gateway is very similar to the steps for setting up IoT Edge as a downstream device.
+ ```bash
+ # !!! For test only - do not use in production !!!
+
+ # Create the root CA test certificate
+ ./certGen.sh create_root_and_intermediate
+
+ # Create the parent (gateway) device test certificate
+ # signed by the shared root CA certificate
+ ./certGen.sh create_edge_device_ca_certificate "gateway"
+
+ # Create the child (downstream) device test certificate
+ # signed by the shared root CA certificate
+ ./certGen.sh create_edge_device_ca_certificate "downstream"
+ ```
+
+ > [!WARNING]
+ > Do not use certificates created by the test scripts for production. They contain hard-coded passwords and expire by default after 30 days. The test CA certificates are provided for demonstration purposes to help you quickly understand CA Certificates. Use your own security best practices for certification creation and lifetime management in production.
-To enable gateway discovery, every IoT Edge gateway device needs to be configured with a **hostname** that its child devices will use to find it on the local network. Every downstream IoT Edge device needs to be configured with a **parent_hostname** to connect to. If a single IoT Edge device is both a parent and a child device, it needs both parameters.
+ For more information about creating test certificates, see [create demo certificates to test IoT Edge device features](how-to-create-test-certificates.md).
-To enable secure connections, every IoT Edge device in a gateway scenario needs to be configured with an unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
+01. You'll need to transfer the certificates and keys to each device. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or a function like [Secure file copy](https://www.ssh.com/ssh/scp/). Choose the method that best matches your scenario.
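+
+For example, if you use secure file copy, you might copy the test certificates to the target device and then move them into the directory used in this article. The following commands are a minimal sketch; the user name and IP address are placeholders.
+
+```bash
+# Copy the shared root CA certificate and this device's CA certificate and key
+# to the target device (user name and address are placeholders)
+scp azure-iot-test-only.root.ca.cert.pem iot-edge-device-ca-gateway.cert.pem iot-edge-device-ca-gateway.key.pem azureUser@10.0.0.4:~
+
+# On the target device, move the files into the directory used in this article
+sudo mkdir -p /var/secrets
+sudo mv ~/azure-iot-test-only.root.ca.cert.pem ~/iot-edge-device-ca-gateway.*.pem /var/secrets/
+```
+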
-You should already have IoT Edge installed on your device. If not, follow the steps to [Manually provision a single Linux IoT Edge device](how-to-provision-single-device-linux-symmetric.md).
+For more information on installing certificates on a device, see [Manage certificates on an IoT Edge device](how-to-manage-device-certificates.md).
-The steps in this section reference the **root CA certificate** and **device CA certificate and private key** that were discussed earlier in this article. If you created those certificates on a different device, have them available on this device. You can transfer the files physically, like with a USB drive, with a service like [Azure Key Vault](../key-vault/general/overview.md), or with a function like [Secure file copy](https://www.ssh.com/ssh/scp/).
+## Configure parent device
-Use the following steps to configure IoT Edge on your device.
+To configure your parent device, open a local or remote command shell.
-Make sure that the user **iotedge** has read permissions for the directory holding the certificates and keys.
+To enable secure connections, every IoT Edge parent device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-1. Install the **root CA certificate** on this IoT Edge device.
+01. Transfer the **root CA certificate**, **parent device CA certificate**, and **parent private key** to the parent device. The examples in this article use the directory `/var/secrets` for the certificates and keys directory.
- * **Debian/Ubuntu**
- ```bash
- sudo cp <path>/<root ca certificate>.pem /usr/local/share/ca-certificates/<root ca certificate>.pem.crt
- ```
+01. Install the **root CA certificate** on the parent IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command.
- * **IoT Edge for Linux on Windows (EFLOW)**
- ```bash
- sudo cp <path>/<root ca certificate>.pem /etc/pki/ca-trust/source/anchors/<root ca certificate>.pem.crt
- ```
+ **Debian or Ubuntu:**
-1. Update the certificate store.
+ ```bash
+ sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
- * **Debian/Ubuntu**
- ```bash
- sudo update-ca-certificates
- ```
- This command should output that one certificate was added to /etc/ssl/certs.
+ sudo update-ca-certificates
+ ```
+ **IoT Edge for Linux on Windows (EFLOW):**
- * **IoT Edge for Linux on Windows (EFLOW)**
- ```bash
- sudo update-ca-trust
- ```
- For more information, check [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
-
-
-1. Open the IoT Edge configuration file.
+ ```bash
+ sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
- ```bash
- sudo nano /etc/aziot/config.toml
- ```
+ sudo update-ca-trust
+ ```
+ For more information about using `update-ca-trust`, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
+
+The command reports one certificate was added to `/etc/ssl/certs`.
- >[!TIP]
- >If the config file doesn't exist on your device yet, use the following command to create it based on the template file:
- >
- >```bash
- >sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
- >```
+```output
+Updating certificates in /etc/ssl/certs...
+1 added, 0 removed; done.
+```
-1. Find the **Hostname** section in the config file. Uncomment the line that contains the `hostname` parameter, and update the value to be the fully qualified domain name (FQDN) or the IP address of the IoT Edge device.
+### Update parent configuration file
- The value of this parameter is what downstream devices will use to connect to this gateway. The hostname takes the machine name by default, but the FQDN or IP address is required to connect downstream devices.
+You should already have IoT Edge installed on your device. If not, follow the steps to
+[Manually provision a single Linux IoT Edge device](how-to-provision-single-device-linux-symmetric.md).
- Use a hostname shorter than 64 characters, which is the character limit for a server certificate common name.
+01. Verify the `/etc/aziot/config.toml` configuration file exists on the parent device.
- Be consistent with the hostname pattern across a gateway hierarchy. Use either FQDNs or IP addresses, but not both.
+ If the config file doesn't exist on your device, use the following command to create it based on the template file:
-1. *If this device is a child device*, find the **Parent hostname** section. Uncomment and update the `parent_hostname` parameter to be the FQDN or IP address of the parent device, matching whatever was provided as the hostname in the parent device's config file.
+ ```bash
+ sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
+ ```
+ You can also use the template file as a reference to add configuration parameters in this section.
- ```toml
- parent_hostname = "my-parent-device"
- ```
+01. Open the IoT Edge configuration file using an editor. For example, use the `nano` editor to open the `/etc/aziot/config.toml` file.
-1. Find the **Trust bundle cert** section. Uncomment and update the `trust_bundle_cert` parameter with the file URI to the root CA certificate on your device.
+ ```bash
+ sudo nano /etc/aziot/config.toml
+ ```
-1. Verify your IoT Edge device will use the correct version of the IoT Edge agent when it starts up.
+01. Find the **hostname** parameter or add it to the beginning of the configuration file. Update the value to be the fully qualified domain name (FQDN) or the IP address of the IoT Edge parent device. For example:
- Find the **Default Edge Agent** section and verify the image value is IoT Edge version 1.2. If not, update it:
+ ```toml
+ hostname = "10.0.0.4"
+ ```
- ```toml
- [agent.config]
- image: "mcr.microsoft.com/azureiotedge-agent:1.2"
- ```
+ To enable gateway discovery, every IoT Edge gateway (parent) device needs to specify a
+ **hostname** parameter that its child devices will use to find it on the local network. Every
+ downstream IoT Edge device needs to specify a **parent_hostname** parameter to identify its
+ parent. In a hierarchical scenario where a single IoT Edge device is both a parent and a child
+ device, it needs both parameters.
-1. Find the **Edge CA certificate** section in the config file. Uncomment the lines in this section and provide the file URI paths for the certificate and key files on the IoT Edge device.
+ The *hostname*, *local_gateway_hostname*, and *trust_bundle_cert* parameters must be at the beginning of the configuration file, before any sections. Adding the parameters before any defined sections ensures they're applied correctly.
- ```toml
- [edge_ca]
- cert = "file:///<path>/<device CA cert>"
- pk = "file:///<path>/<device CA key>"
- ```
+ Use a hostname shorter than 64 characters, which is the character limit for a server certificate
+ common name.
-1. Save (`Ctrl+O`) and close (`Ctrl+X`) the config file.
+ Be consistent with the hostname pattern across a gateway hierarchy. Use either FQDNs or IP
+ addresses, but not both. FQDN or IP address is required to connect downstream devices.
-1. If you've used any other certificates for IoT Edge before, delete the files in the following two directories to make sure that your new certificates get applied:
+ Set the hostname before the *edgeHub* container is created. If *edgeHub* is running, changing the hostname in the configuration file won't take effect until the container is recreated. For more information on how to verify the hostname is applied, see the [verify parent configuration](#verify-parent-configuration) section.
- * `/var/lib/aziot/certd/certs`
- * `/var/lib/aziot/keyd/keys`
+01. Find the **Trust bundle cert** parameter or add it to the beginning of the configuration file.
-1. Apply your changes.
+ Update the `trust_bundle_cert` parameter with the file URI to the root CA certificate on your
+ device. For example:
- ```bash
- sudo iotedge config apply
- ```
+ ```toml
+ trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ ```
-1. Check for any errors in the configuration.
+01. Find or add the **Edge CA certificate** section in the config file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the parent IoT Edge device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
- ```bash
- sudo iotedge check --verbose
- ```
+ ```toml
+ [edge_ca]
+ cert = "file:///var/secrets/iot-edge-device-ca-gateway.cert.pem"
+ pk = "file:///var/secrets/iot-edge-device-ca-gateway.key.pem"
+ ```
+
+01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.2. For example:
+
+ ```toml
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.2"
+ ```
+
+01. The beginning of your parent configuration file should look similar to the following example.
+
+ ```toml
+ hostname = "10.0.0.4"
+ trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+
+ [edge_ca]
+ cert = "file:///var/secrets/iot-edge-device-ca-gateway.cert.pem"
+ pk = "file:///var/secrets/iot-edge-device-ca-gateway.key.pem"
+ ```
+
+01. Save and close the `config.toml` configuration file. For example, if you're using the **nano** editor, select **Ctrl+O** - *Write Out*, **Enter**, and **Ctrl+X** - *Exit*.
+
+01. If you've used any other certificates for IoT Edge before, delete the files in the following two directories to make sure that your new certificates get applied:
+
+ * `/var/lib/aziot/certd/certs`
+ * `/var/lib/aziot/keyd/keys`
+
+01. Apply your changes.
+
+ ```bash
+ sudo iotedge config apply
+ ```
+
+01. Check for any errors in the configuration.
+
+ ```bash
+ sudo iotedge check --verbose
+ ```
+
+### Verify parent configuration
+
+The *hostname* must be a fully qualified domain name (FQDN) or the IP address of the IoT Edge device, because IoT Edge uses this value in the server certificate when downstream devices connect. The values must match or you'll get an *IP address mismatch* error.
+
+To verify the *hostname*, you need to inspect the environment variables of the *edgeHub* container.
+
+01. List the running IoT Edge containers.
+
+ ```bash
+ iotedge list
+ ```
+
+ Verify *edgeAgent* and *edgeHub* containers are running. The command output should be similar to the following example.
+
+ ```output
+ NAME STATUS DESCRIPTION CONFIG
+ SimulatedTemperatureSensor running Up 5 seconds mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0
+ edgeAgent running Up 17 seconds mcr.microsoft.com/azureiotedge-agent:1.2
+ edgeHub running Up 6 seconds mcr.microsoft.com/azureiotedge-hub:1.2
+ ```
+01. Inspect the *edgeHub* container.
+
+ ```bash
+ sudo docker inspect edgeHub
+ ```
+
+01. In the output, find the **EdgeDeviceHostName** parameter in the *Env* section.
+
+ ```json
+ "EdgeDeviceHostName=10.0.0.4"
+ ```
+
+01. Verify the *EdgeDeviceHostName* parameter value matches the `config.toml` *hostname* setting. If it doesn't match, the *edgeHub* container was running when you modified and applied the configuration. To update the *EdgeDeviceHostName*, remove the *edgeAgent* container.
+
+ ```bash
+ sudo docker rm -f edgeAgent
+ ```
+
+ The *edgeAgent* and *edgeHub* containers are recreated and started within a few minutes. Once *edgeHub* container is running, inspect the container and verify the *EdgeDeviceHostName* parameter matches the configuration file.
+
+## Configure child device
+
+To configure your child device, open a local or remote command shell.
+
+To enable secure connections, every IoT Edge child device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
+
+01. Transfer the **root CA certificate**, **child device CA certificate**, and **child private key** to the child device. The examples in this article use the directory `/var/secrets` for the certificates and keys directory.
+
+01. Install the **root CA certificate** on the child IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command.
+
+ **Debian or Ubuntu:**
+
+ ```bash
+ sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+
+ sudo update-ca-certificates
+ ```
+
+ **IoT Edge for Linux on Windows (EFLOW):**
+
+ ```bash
+ sudo cp /var/secrets/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
+
+ sudo update-ca-trust
+ ```
+ For more information about using `update-ca-trust`, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
+
+The command reports one certificate was added to `/etc/ssl/certs`.
+
+```output
+Updating certificates in /etc/ssl/certs...
+1 added, 0 removed; done.
+```
+
+### Update child configuration file
+
+You should already have IoT Edge installed on your device. If not, follow the steps to
+[Manually provision a single Linux IoT Edge device](how-to-provision-single-device-linux-symmetric.md).
+
+01. Verify the `/etc/aziot/config.toml` configuration file exists on the child device.
+
+ If the config file doesn't exist on your device, use the following command to create it based on the template file:
+
+ ```bash
+ sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
+ ```
+ You can also use the template file as a reference to add configuration parameters in this section.
+
+01. Open the IoT Edge configuration file using an editor. For example, use the `nano` editor to open the `/etc/aziot/config.toml` file.
+
+ ```bash
+ sudo nano /etc/aziot/config.toml
+ ```
+
+01. Find the **parent_hostname** parameter or add it to the beginning of the configuration file.
+ Every downstream IoT Edge device needs to specify a **parent_hostname** parameter to identify
+ its parent. Update the `parent_hostname` parameter to be the FQDN or IP address of the parent
+ device, matching whatever was provided as the hostname in the parent device's config file. For
+ example:
+
+ ```toml
+ parent_hostname = "10.0.0.4"
+ ```
+
+01. Find the **Trust bundle cert** parameter or add it to the beginning of the configuration file.
+
+ Update the `trust_bundle_cert` parameter with the file URI to the root CA certificate on your
+ device. For example:
+
+ ```toml
+ trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+ ```
+
+01. Find or add the **Edge CA certificate** section in the configuration file. Update the certificate `cert` and private key `pk` parameters with the file URI paths for the certificate and key files on the IoT Edge child device. IoT Edge requires the certificate and private key to be in text-based privacy-enhanced mail (PEM) format. For example:
+
+ ```toml
+ [edge_ca]
+ cert = "file:///var/secrets/iot-edge-device-ca-downstream.cert.pem"
+ pk = "file:///var/secrets/iot-edge-device-ca-downstream.key.pem"
+ ```
+
+01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.2. For example:
+
+ ```toml
+ [agent.config]
+ image = "mcr.microsoft.com/azureiotedge-agent:1.2"
+ ```
+
+01. The beginning of your child configuration file should look similar to the following example.
+
+ ```toml
+ parent_hostname = "10.0.0.4"
+ trust_bundle_cert = "file:///var/secrets/azure-iot-test-only.root.ca.cert.pem"
+
+ [edge_ca]
+ cert = "file:///var/secrets/iot-edge-device-ca-downstream.cert.pem"
+ pk = "file:///var/secrets/iot-edge-device-ca-downstream.key.pem"
+ ```
+
+01. Save and close the `config.toml` configuration file. For example, if you're using the **nano** editor, select **Ctrl+O** - *Write Out*, **Enter**, and **Ctrl+X** - *Exit*.
+
+01. If you've used any other certificates for IoT Edge before, delete the files in the following two directories to make sure that your new certificates get applied:
+
+ * `/var/lib/aziot/certd/certs`
+ * `/var/lib/aziot/keyd/keys`
+
+01. Apply your changes.
+
+ ```bash
+ sudo iotedge config apply
+ ```
+
+01. Check for any errors in the configuration.
+
+ ```bash
+ sudo iotedge check --verbose
+ ```
+
+ >[!TIP]
+ >The IoT Edge check tool uses a container to perform some of the diagnostics check. If you want to use this tool on downstream IoT Edge devices, make sure they can access `mcr.microsoft.com/azureiotedge-diagnostics:latest`, or have the container image in your private container registry.
+
+### Verify connectivity from child to parent
+
+01. Verify the TLS/SSL connection from the child to the parent by running the following `openssl` command on the child device. Replace `<parent hostname>` with the FQDN or IP address of the parent.
+
+ ```bash
+ echo | openssl s_client -connect <parent hostname>:8883 2>/dev/null | openssl x509 -text
+ ```
- >[!TIP]
- >The IoT Edge check tool uses a container to perform some of the diagnostics check. If you want to use this tool on downstream IoT Edge devices, make sure they can access `mcr.microsoft.com/azureiotedge-diagnostics:latest`, or have the container image in your private container registry.
+ The command should return certificate details similar to the following example.
+
+ ```Output
+ azureUser@child-vm:~$ echo | openssl s_client -connect 10.0.0.4:8883 2>/dev/null | openssl x509 -text
+
+ Certificate:
+ Data:
+ Version: 3 (0x2)
+ Serial Number: 0 (0x0)
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = gateway.ca
+ Validity
+ Not Before: Apr 27 16:25:44 2022 GMT
+ Not After : May 26 14:43:24 2022 GMT
+ Subject: CN = 10.0.0.4
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (2048 bit)
+ Modulus:
+ 00:b2:a6:df:d9:91:43:4e:77:d8:2c:2a:f7:01:b1:
+ ...
+ 33:bd:c8:f0:de:07:36:2c:0d:06:9e:89:22:95:5e:
+ 3b:43
+ Exponent: 65537 (0x10001)
+ X509v3 extensions:
+ X509v3 Extended Key Usage:
+ TLS Web Server Authentication
+ X509v3 Subject Alternative Name:
+ DNS:edgehub, IP Address:10.0.0.4
+ Signature Algorithm: sha256WithRSAEncryption
+ 76:d4:5b:4a:d5:c4:80:7d:32:bc:c0:a8:ce:4f:69:5d:4d:ee:
+ ...
+ ```
+
+ The `Subject: CN = ` value should match the **hostname** parameter specified in the parent's `config.toml` configuration file.
+
+ If the command times out, there may be blocked ports between the child and parent devices. Review the network configuration and settings for the devices.
## Network isolate downstream devices
Some network architectures, like those that follow the ISA-95 standard, seek to
This network configuration requires that only the IoT Edge device in the top layer of a gateway hierarchy has direct connections to the cloud. IoT Edge devices in the lower layers can only communicate with their parent device or any children devices. Special modules on the gateway devices enable this scenario, including:
-* The **API proxy module** is required on any IoT Edge gateway that has another IoT Edge device below it. That means it must be on *every layer* of a gateway hierarchy except the bottom layer. This module uses an [nginx](https://nginx.org) reverse proxy to route HTTP data through network layers over a single port. It is highly configurable through its module twin and environment variables, so can be adjusted to fit your gateway scenario requirements.
+* The **API proxy module** is required on any IoT Edge gateway that has another IoT Edge device below it. That means it must be on *every layer* of a gateway hierarchy except the bottom layer. This module uses an [nginx](https://nginx.org) reverse proxy to route HTTP data through network layers over a single port. It's highly configurable through its module twin and environment variables, so it can be adjusted to fit your gateway scenario requirements.
* The **Docker registry module** can be deployed on the IoT Edge gateway at the *top layer* of a gateway hierarchy. This module is responsible for retrieving and caching container images on behalf of all the IoT Edge devices in lower layers. The alternative to deploying this module at the top layer is to use a local registry, or to manually load container images onto devices and set the module pull policy to **never**.
For each gateway device in a lower layer, network operators need to:
The IoT Edge device at the top layer of a gateway hierarchy has a set of required modules that must be deployed to it, in addition to any workload modules you may run on the device.
-The API proxy module was designed to be customized to handle most common gateway scenarios. This article provides and example to set up the modules in a basic configuration. Refer to [Configure the API proxy module for your gateway hierarchy scenario](how-to-configure-api-proxy-module.md) for more detailed information and examples.
+The API proxy module was designed to be customized to handle most common gateway scenarios. This article provides an example to set up the modules in a basic configuration. Refer to [Configure the API proxy module for your gateway hierarchy scenario](how-to-configure-api-proxy-module.md) for more detailed information and examples.
# [Portal](#tab/azure-portal)
IoT Edge devices in lower layers of a gateway hierarchy have one required module
Before discussing the required proxy module for IoT Edge devices in gateway hierarchies, it's important to understand how IoT Edge devices in lower layers get their module images.
-If your lower layer devices can't connect to the cloud, but you want them to pull module images as usual, then the top layer device of the gateway hierarchy must be configured to handle these requests. The top layer device needs to run a Docker **registry** module that is mapped to your container registry. Then, configure the API proxy module to route container requests to it. Those details are discussed in the earlier sections of this article. In this configuration, the lower layer devices should not point to cloud container registries, but to the registry running in the top layer.
+If your lower layer devices can't connect to the cloud, but you want them to pull module images as usual, then the top layer device of the gateway hierarchy must be configured to handle these requests. The top layer device needs to run a Docker **registry** module that is mapped to your container registry. Then, configure the API proxy module to route container requests to it. Those details are discussed in the earlier sections of this article. In this configuration, the lower layer devices shouldn't point to cloud container registries, but to the registry running in the top layer.
For example, instead of calling `mcr.microsoft.com/azureiotedge-api-proxy:1.1`, lower layer devices should call `$upstream:443/azureiotedge-api-proxy:1.1`.
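
The following fragment is a hedged sketch of how a module image might be referenced in a lower layer device's deployment manifest. It's trimmed for brevity; a complete manifest includes additional module and runtime properties, and the module name is a placeholder.

```json
"modules": {
    "simulatedTemperatureSensor": {
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": "$upstream:443/azureiotedge-simulated-temperature-sensor:1.0"
        }
    }
}
```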
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-publish-subscribe.md
Sending a direct method is an HTTP call and so doesn't go through the MQTT broke
To connect two MQTT brokers, the IoT Edge hub includes an MQTT bridge. An MQTT bridge is commonly used to connect a running MQTT broker to another MQTT broker. Only a subset of the local traffic is typically pushed to another broker. > [!NOTE]
-> The IoT Edge hub bridge can currently only be used between nested IoT Edge devices. It can't be used to send data to IoT Hub since IoT Hub isn't a full-featured MQTT broker. To learn more about IoT Hub MQTT broker features support, see [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md). To learn more about nesting IoT Edge devices, see [Connect a downstream IoT Edge device to an Azure IoT Edge gateway](how-to-connect-downstream-iot-edge-device.md#configure-iot-edge-on-devices).
+> The IoT Edge hub bridge can currently only be used between nested IoT Edge devices. It can't be used to send data to IoT Hub since IoT Hub isn't a full-featured MQTT broker. To learn more about IoT Hub MQTT broker features support, see [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md). To learn more about nesting IoT Edge devices, see [Connect a downstream IoT Edge device to an Azure IoT Edge gateway](how-to-connect-downstream-iot-edge-device.md).
In a nested configuration, the IoT Edge hub MQTT bridge acts as a client of the parent MQTT broker, so authorization rules must be set on the parent EdgeHub to allow the child EdgeHub to publish and subscribe to specific user-defined topics that the bridge is configured for.
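
For example, an authorization entry in the parent EdgeHub's module twin might look like the following sketch. The child device identity and topic filters are placeholders and must match the topics that the bridge is configured for.

```json
{
    "schemaVersion": "1.2",
    "mqttBroker": {
        "authorizations": [{
            "identities": ["childEdgeDevice/$edgeHub"],
            "allow": [{
                "operations": ["mqtt:connect", "mqtt:publish", "mqtt:subscribe"],
                "resources": ["alerts/#", "/remote/messages/#"]
            }]
        }]
    }
}
```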
The following JSON snippet is an example of an IoT Edge MQTT bridge configuratio
```json {
- "schemaVersion": "1.2",
- "mqttBroker": {
- "bridges": [{
- "endpoint": "$upstream",
- "settings": [{
- "direction": "in",
- "topic": "alerts/#"
- },
- {
- "direction": "out",
- "topic": "#",
- "inPrefix": "/local/telemetry/",
- "outPrefix": "/remote/messages/"
- }
- ]
- }]
- }
+ "schemaVersion": "1.2",
+ "mqttBroker": {
+ "bridges": [{
+ "endpoint": "$upstream",
+ "settings": [{
+ "direction": "in",
+ "topic": "alerts/#"
+ },
+ {
+ "direction": "out",
+ "topic": "#",
+ "inPrefix": "/local/telemetry/",
+ "outPrefix": "/remote/messages/"
+ }
+ ]
+ }]
+ }
} ``` > [!NOTE]
-> The MQTT protocol will automatically be used as upstream protocol when the MQTT broker is used and IoT Edge is in a nested configuration, for example, with a `parent_hostname` specified. To learn more about upstream protocols, see [Cloud communication](iot-edge-runtime.md#cloud-communication). To learn more about nested configurations, see [Connect a downstream IoT Edge device to an Azure IoT Edge gateway](how-to-connect-downstream-iot-edge-device.md#configure-iot-edge-on-devices).
+> The MQTT protocol will automatically be used as upstream protocol when the MQTT broker is used and IoT Edge is in a nested configuration, for example, with a `parent_hostname` specified. To learn more about upstream protocols, see [Cloud communication](iot-edge-runtime.md#cloud-communication). To learn more about nested configurations, see [Connect a downstream IoT Edge device to an Azure IoT Edge gateway](how-to-connect-downstream-iot-edge-device.md).
## Next steps
iot-edge Tutorial Nested Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge.md
description: This tutorial shows you how to create a hierarchical structure of I
Previously updated : 2/26/2021 Last updated : 05/04/2022
To use the `iotedge-config` tool to create and configure your hierarchy, follow
In the **iothub** section, populate the `iothub_hostname` and `iothub_name` fields with your information. This information can be found on the overview page of your IoT Hub on the Azure portal.
- In the optional **certificates** section, you can populate the fields with the absolute paths to your certificate and key. If you leave these fields blank, the script will automatically generate self-signed test certificates for your use. If you're unfamiliar with how certificates are used in a gateway scenario, check out [the how-to guide's certificate section](how-to-connect-downstream-iot-edge-device.md#prepare-certificates).
+ In the optional **certificates** section, you can populate the fields with the absolute paths to your certificate and key. If you leave these fields blank, the script will automatically generate self-signed test certificates for your use. If you're unfamiliar with how certificates are used in a gateway scenario, check out [the how-to guide's certificate section](how-to-connect-downstream-iot-edge-device.md#generate-certificates).
In the **configuration** section, the `template_config_path` is the path to the `device_config.toml` template used to create your device configurations. The `default_edge_agent` field determines what Edge Agent image lower layer devices will pull and from where.
To configure the IoT Edge runtime, you need to apply the configuration bundles c
On the **top layer device**, you will receive a prompt to enter the hostname. On the **lower layer device**, it will ask for the hostname and parent's hostname. Supply the appropriate IP or FQDN for each prompt. You can use either, but be consistent in your choice across devices. The output of the install script is pictured below.
- If you want a closer look at what modifications are being made to your device's configuration file, see [the configure IoT Edge on devices section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#configure-iot-edge-on-devices).
+ If you want a closer look at what modifications are being made to your device's configuration file, see [the configure IoT Edge on devices section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#configure-parent-device).
![Installing the configuration bundles will update the config.toml files on your device and restart all IoT Edge services automatically](./media/tutorial-nested-iot-edge/configuration-install-output.png)
key-vault Soft Delete Change https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-change.md
# Soft-delete will be enabled on all key vaults > [!WARNING]
-> Breaking change: the ability to opt out of soft-delete will be deprecated soon. Azure Key Vault users and administrators should enable soft-delete on their key vaults immediately.
->
-> For Azure Key Vault Managed HSM, soft-delete is enabled by default and can't be disabled.
+> Breaking change: you must enable soft-delete on your key vaults immediately. See below for details.
-When a secret is deleted from a key vault without soft-delete protection, the secret is permanently deleted. Users can currently opt out of soft-delete during key vault creation. However, Microsoft will soon enable soft-delete protection on all key vaults to protect secrets from accidental or malicious deletion by a user. Users will no longer be able to opt out of or turn off soft-delete.
+If a secret is deleted and the key vault does not have soft-delete protection, it is deleted permanently. Although users can currently opt out of soft-delete during key vault creation, this ability is deprecated. **In February 2025, Microsoft will enable soft-delete protection on all key vaults, and users will no longer be able to opt out of or turn off soft-delete.** This will protect secrets from accidental or malicious deletion by a user.
:::image type="content" source="../media/softdeletediagram.png" alt-text="Diagram showing how a key vault is deleted with soft-delete protection versus without soft-delete protection.":::
For full details on the soft-delete functionality, see [Azure Key Vault soft-del
## Can my application work with soft-delete enabled?
-> [!Important]
-> Review the following information carefully before turning on soft-delete for your key vaults.
- Key vault names are globally unique. The names of secrets stored in a key vault are also unique. You won't be able to reuse the name of a key vault or key vault object that exists in the soft-deleted state. For example, if your application programmatically creates a key vault named "Vault A" and later deletes "Vault A," the key vault will be moved to the soft-deleted state. Your application won't be able to re-create another key vault named "Vault A" until the key vault is purged from the soft-deleted state.
-Also, if your application creates a key named `test key` in "Vault A" and later deletes that key, your application won't be able to create a new key named `test key` in "Vault A" until the `test key` object is purged from the soft-deleted state.
+Also, if your application creates a key named `test key` in "Vault A" and later deletes that key, your application won't be able to create a new key named `test key` in "Vault A" until the `test key` object is purged from the soft-deleted state.
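+
+For example, before reusing a name, you could permanently remove the soft-deleted key with the Azure CLI. The following commands are a minimal sketch; the vault and key names are placeholders.
+
+```azurecli
+# List keys in the soft-deleted state for the vault (names are placeholders)
+az keyvault key list-deleted --vault-name VaultA
+
+# Permanently remove the soft-deleted key so that its name can be reused
+az keyvault key purge --vault-name VaultA --name testkey
+```
+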
-Attempting to delete a key vault object and re-create it with the same name without purging it from the soft-deleted state first can cause conflict errors. These errors might cause your applications or automation to fail. Consult your dev team before you make the following required application and administration changes.
+Attempting to delete a key vault object and re-create it with the same name without purging it from the soft-deleted state first can cause conflict errors. These errors might cause your applications or automation to fail. Consult your dev team before you make the following required application and administration changes.
### Application changes
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
For more information, see [Load balancer limits](../azure-resource-manager/manag
- A standalone virtual machine resource, availability set resource, or virtual machine scale set resource can reference one SKU, never both. - [Move operations](../azure-resource-manager/management/move-resource-group-and-subscription.md): - Resource group move operations (within same subscription) **are supported** for Standard Load Balancer and Standard Public IP.
- - [Subscription group move operations](../azure-resource-manager/management/move-support-resources.md) are **not** supported for Standard Load Balancer and Standard Public IP resources.
+ - [Subscription group move operations](../azure-resource-manager/management/move-support-resources.md) are **not** supported for Standard Load Balancers.
## Next steps
load-testing Monitor Load Testing Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/monitor-load-testing-reference.md
+
+ Title: Monitor Azure Load Testing data reference
+description: Important reference material needed when you monitor Azure Load Testing
+++++ Last updated : 04/22/2022+
+# Monitor Azure Load Testing data reference
+
+Learn about the data and resources collected by Azure Monitor from your Azure Load Testing instance. See [Monitor Azure Load Testing](monitor-load-testing.md) for details on collecting and analyzing monitoring data.
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for Azure Load Testing.
++
+### Operational logs
+
+Operational log entries include elements listed in the following table:
+
+|Name |Description |
+|||
+|TimeGenerated | Date and time when the record was created |
+|RequestMethod | HTTP Method of the API request |
+|HttpStatusCode | HTTP status code of the API response |
+|CorrelationId | Unique identifier to be used to correlate logs |
+|RequestId | Unique identifier to be used to correlate request logs |
+|Identity | JSON structure containing information about the caller |
+|RequestBody | Request body of the API calls |
+|ResourceRegion | Region where the resource is located |
+|ServiceLocation |Location of the service that processed the request |
+|RequestUri |URI of the API request |
+|OperationId |Operation identifier for the REST API |
+|OperationName |Name of the operation attempted on the resource |
+|ResultType | Indicates if the request was successful or failed |
+|DurationMs |Amount of time taken to process the request, in milliseconds |
+|CallerIpAddress |IP Address of the client that submitted the request |
+|FailureDetails |Details of the error if the request failed |
+|UserAgent |HTTP header passed by the client, if applicable |
+|OperationVersion | API version of the request |
+
+## See also
+
+- See [Monitor Azure Load Testing](monitor-load-testing.md) for a description of monitoring Azure Load Testing.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
load-testing Monitor Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/monitor-load-testing.md
+
+ Title: Monitor Azure Load Testing
+description: Start here to learn how to monitor Azure Load Testing
+++++ Last updated : 04/22/2022++++
+# Monitor Azure Load Testing
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure Load Testing.
++
+## What is Azure Monitor?
+Azure Load Testing creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises.
+
+Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+
+- What is Azure Monitor?
+- Costs associated with monitoring
+- Monitoring data collected in Azure
+- Configuring data collection
+- Standard tools in Azure for analyzing and alerting on monitoring data
+
+The following sections build on this article by describing the specific data gathered for Azure Load Testing. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
+
+> [!TIP]
+> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+
+## Monitoring data
++
+Azure Load Testing collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md) for detailed information on the logs and metrics created by Azure Load Testing.
+++
+## Collection and routing
++
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
++
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Load Testing are listed in [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
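As a rough illustration only, the following sketch creates such a diagnostic setting programmatically with the `azure-mgmt-monitor` Python package and routes the logs to a Log Analytics workspace. The resource IDs, the setting name, and the log category group are placeholders and assumptions, not values documented in this article.

```python
# Rough sketch: route Azure Load Testing resource logs to a Log Analytics workspace.
# Assumes the azure-identity and azure-mgmt-monitor packages; every ID and the
# category group below are placeholders, not values confirmed by this article.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
load_testing_resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.LoadTestService/loadTests/<load-testing-resource>"  # assumed provider path
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)
client.diagnostic_settings.create_or_update(
    resource_uri=load_testing_resource_id,
    name="route-to-log-analytics",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category_group": "allLogs", "enabled": True}],  # assumed category group
    },
)
```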
++
+The following sections describe which types of logs you can collect.
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). You can find the schema for Azure Load Testing resource logs in the [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
+
+The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of resource logs types collected for Azure Load Testing, see [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
++
+### Sample Kusto queries
++
+> [!IMPORTANT]
+> When you select **Logs** from the Azure Load Testing menu, Log Analytics is opened with the query scope set to the current Azure Load Testing resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Load Testing resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
++
+Following are queries that you can use to help you monitor your Azure Load Testing resources:
+- Retrieve the list of tests:
+```
+AzureLoadTestingOperation
+| where OperationId == "Test_CreateOrUpdateTest"
+| where HttpStatusCode == 201
+| summarize count() by _ResourceId
+
+```
+- Retrieve the list of test runs:
+```
+AzureLoadTestingOperation
+| where OperationId == "TestRun_CreateAndUpdateTest"
+| where HttpStatusCode == 201
+| summarize count() by _ResourceId
+
+```
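If you'd rather run these queries from code than from the portal, here's a minimal sketch that uses the `azure-monitor-query` Python package; the package choice and the Log Analytics workspace ID are assumptions for illustration, not something this article prescribes.

```python
# Minimal sketch: run one of the sample Kusto queries against a Log Analytics workspace.
# Assumes the azure-identity and azure-monitor-query packages; the workspace ID is a placeholder GUID.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

query = """
AzureLoadTestingOperation
| where OperationId == "Test_CreateOrUpdateTest"
| where HttpStatusCode == 201
| summarize count() by _ResourceId
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=30),
)

# Print every row of each returned table.
for table in response.tables:
    for row in table.rows:
        print(row)
```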
++
+## Next steps
++
+- See [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md) for a reference of the metrics, logs, and other important values created by Azure Load Testing.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
During preview, the following considerations apply:
* Brazil South * Canada Central * France Central
+ * Japan East
+ * South Central US
+ * UK South
* Azure Logic Apps currently supports the option to enable availability zones *only for new Consumption logic app workflows* that run in multi-tenant Azure Logic Apps.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
ms.suite: integration Previously updated : 03/01/2022 Last updated : 05/05/2022 # Reference guide to workflow expression functions in Azure Logic Apps and Power Automate
For the full reference about each function, see the
| Date or time function | Task | | | - |
-| [addDays](../logic-apps/workflow-definition-language-functions-reference.md#addDays) | Add a number of days to a timestamp. |
-| [addHours](../logic-apps/workflow-definition-language-functions-reference.md#addHours) | Add a number of hours to a timestamp. |
-| [addMinutes](../logic-apps/workflow-definition-language-functions-reference.md#addMinutes) | Add a number of minutes to a timestamp. |
-| [addSeconds](../logic-apps/workflow-definition-language-functions-reference.md#addSeconds) | Add a number of seconds to a timestamp. |
-| [addToTime](../logic-apps/workflow-definition-language-functions-reference.md#addToTime) | Add a number of time units to a timestamp. See also [getFutureTime](../logic-apps/workflow-definition-language-functions-reference.md#getFutureTime). |
+| [addDays](../logic-apps/workflow-definition-language-functions-reference.md#addDays) | Add days to a timestamp. |
+| [addHours](../logic-apps/workflow-definition-language-functions-reference.md#addHours) | Add hours to a timestamp. |
+| [addMinutes](../logic-apps/workflow-definition-language-functions-reference.md#addMinutes) | Add minutes to a timestamp. |
+| [addSeconds](../logic-apps/workflow-definition-language-functions-reference.md#addSeconds) | Add seconds to a timestamp. |
+| [addToTime](../logic-apps/workflow-definition-language-functions-reference.md#addToTime) | Add specified time units to a timestamp. See also [getFutureTime](../logic-apps/workflow-definition-language-functions-reference.md#getFutureTime). |
| [convertFromUtc](../logic-apps/workflow-definition-language-functions-reference.md#convertFromUtc) | Convert a timestamp from Universal Time Coordinated (UTC) to the target time zone. | | [convertTimeZone](../logic-apps/workflow-definition-language-functions-reference.md#convertTimeZone) | Convert a timestamp from the source time zone to the target time zone. | | [convertToUtc](../logic-apps/workflow-definition-language-functions-reference.md#convertToUtc) | Convert a timestamp from the source time zone to Universal Time Coordinated (UTC). |
And returns this result: `2.5`
### addDays
-Add a number of days to a timestamp.
+Add days to a timestamp.
``` addDays('<timestamp>', <days>, '<format>'?)
addDays('<timestamp>', <days>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*days*> | Yes | Integer | The positive or negative number of days to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
And returns this result: `"2018-03-10T00:00:00.0000000Z"`
### addHours
-Add a number of hours to a timestamp.
+Add hours to a timestamp.
``` addHours('<timestamp>', <hours>, '<format>'?)
addHours('<timestamp>', <hours>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*hours*> | Yes | Integer | The positive or negative number of hours to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
And returns this result: `"2018-03-15T10:00:00.0000000Z"`
### addMinutes
-Add a number of minutes to a timestamp.
+Add minutes to a timestamp.
``` addMinutes('<timestamp>', <minutes>, '<format>'?)
addMinutes('<timestamp>', <minutes>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*minutes*> | Yes | Integer | The positive or negative number of minutes to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
Here's the updated JSON object:
### addSeconds
-Add a number of seconds to a timestamp.
+Add seconds to a timestamp.
``` addSeconds('<timestamp>', <seconds>, '<format>'?)
addSeconds('<timestamp>', <seconds>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*seconds*> | Yes | Integer | The positive or negative number of seconds to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
And returns this result: `"2018-03-15T00:00:25.0000000Z"`
### addToTime
-Add a number of time units to a timestamp.
-See also [getFutureTime()](#getFutureTime).
+Add the specified time units to a timestamp. See also [getFutureTime()](#getFutureTime).
``` addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, please review: [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones). |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, review [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones). |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<fo
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
string(add(decimal('1.2345678912312131'), decimal('1.2345678912312131'))) // Ret
### decodeBase64 (deprecated)
-This function is deprecated, so please use [base64ToString()](#base64ToString) instead.
+This function is deprecated, so use [base64ToString()](#base64ToString) instead.
<a name="decodeDataUri"></a>
div(<dividend>, <divisor>)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
-| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but cannot be 0 |
+| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but can't be zero |
||||| | Return value | Type | Description |
formatDateTime('<timestamp>', '<format>'?, '<locale>'?)
| Parameter | Required | Type | Description | |--|-||-| | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
-| <*locale*> | No | String | The locale to use. <br><br>- If unspecified, the current behavior is unchanged. <br><br>- If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*locale*> | No | String | The locale to use. If unspecified, the value is `en-us`. If *locale* isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
formatNumber(<number>, <format>, <locale>?)
| | -- | - | -- | | <*number*> | Yes | Integer or Double | The value that you want to format. | | <*format*> | Yes | String | A composite format string that specifies the format that you want to use. For the supported numeric format strings, see [Standard numeric format strings](/dotnet/standard/base-types/standard-numeric-format-strings), which are supported by `number.ToString(<format>, <locale>)`. |
-| <*locale*> | No | String | The locale to use as supported by `number.ToString(<format>, <locale>)`. If not specified, the default value is `en-us`. If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
+| <*locale*> | No | String | The locale to use as supported by `number.ToString(<format>, <locale>)`. If unspecified, the value is `en-us`. If *locale* isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
getFutureTime(<interval>, <timeUnit>, <format>?)
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*interval*> | Yes | Integer | The number of specified time units to add |
+| <*interval*> | Yes | Integer | The number of time units to add |
| <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" | | <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. | |||||
iterationIndexes('<loopName>')
*Example*
-This example creates a counter variable and increments that variable by one during each iteration in an Until loop until the counter value reaches five. The example also creates a variable that tracks the current index for each iteration. In the Until loop, during each iteration, the example increments the counter and then assigns the counter value to the current index value and then increments the counter. While in the loop, this example references the current iteration index by using the `iterationIndexes` function:
+This example creates a counter variable and increments that variable by one during each iteration in an Until loop until the counter value reaches five. The example also creates a variable that tracks the current index for each iteration. During each iteration in the Until loop, the example increments the counter value, assigns that value to the current index, and then increments the counter value again. While in the loop, this example references the current iteration index by using the `iterationIndexes` function:
`iterationIndexes('Until_Max_Increment')`
mod(<dividend>, <divisor>)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*dividend*> | Yes | Integer or Float | The number to divide by the *divisor* |
-| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but cannot be 0. |
+| <*divisor*> | Yes | Integer or Float | The number that divides the *dividend*, but can't be zero |
||||| | Return value | Type | Description |
nthIndexOf('<text>', '<searchText>', <occurrence>)
|--|-||-| | <*text*> | Yes | String | The string that contains the substring to find | | <*searchText*> | Yes | String | The substring to find |
-| <*ocurrence*> | Yes | Integer | A number that specifies the *n*th occurrence of the substring to find. If *ocurrence* is negative, starts searching from the end. |
+| <*occurrence*> | Yes | Integer | A number that specifies the *n*th occurrence of the substring to find. If *occurrence* is negative, start searching from the end. |
||||| | Return value | Type | Description |
parseDateTime('<timestamp>', '<locale>'?, '<format>'?)
| Parameter | Required | Type | Description | |--|-||-| | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*locale*> | No | String | The locale to use. <br><br>If not specified, default locale is used. <br><br>If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br> If not specified, the parsing will be attempted with multiple compatible with the provided locale. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*locale*> | No | String | The locale to use. <br><br>If not specified, the default locale is `en-us`. <br><br>If *locale* isn't a valid value, an error is generated. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. If the format isn't specified, parsing is attempted with multiple formats that are compatible with the provided locale. If the format isn't a valid value, an error is generated. |
|||| | Return value | Type | Description |
Here's the updated JSON object:
### result
-Return the results from the top-level actions in the specified scoped action, such as a `For_each`, `Until`, or `Scope` action. The `result()` function accepts a single parameter, which is the scope's name, and returns an array that contains information from the first-level actions in that scope. These action objects include the same attributes as those returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
+Return the results from the top-level actions in the specified scoped action, such as a `For_each`, `Until`, or `Scope` action. The `result()` function accepts a single parameter, which is the scope's name, and returns an array that contains information from the first-level actions in that scope. These action objects include the same attributes as the attributes returned by the `actions()` function, such as the action's start time, end time, status, inputs, correlation IDs, and outputs.
> [!NOTE] > This function returns information *only* from the first-level actions in the scoped action and not from deeper nested actions such as switch or condition actions.
startOfDay('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
startOfHour('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
startOfMonth('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
Optionally, you can specify a different format with the <*format*> parameter.
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*format*> | No | String | A numeric format string that is either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated. |
||||| | Return value | Type | Description |
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## April 29, 2022
+[Data Science VM – Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview) and [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.04.27`
+
+Main changes:
+
+- Fixed runtime import for the `Plotly` and `summarytools` RStudio extensions.
+- `Cudatoolkit` and `CUDNN` upgraded to 13.1 and 2.8.1 respectively.
+- Fixed Python 3.8 AzureML notebook runs by pinning `matplotlib` to 3.2.1 and `cycler` to 0.11.0 in the `Azureml_py38` environment.
+ ## April 26, 2022 [Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
-Version: 22.04.21
+Version: `22.04.21`
Main changes: -- Plotly R studio extension patch.-- Update Rscript env path to support latest R studio version 4.1.3.
+- `Plotly` RStudio extension patch.
+- Update `Rscript` env path to support latest R studio version 4.1.3.
## April 14, 2022 New DSVM offering for [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) is currently live in the marketplace.
-Version: 22.04.05
+Version: `22.04.05`
## April 04, 2022 New Image for [Data Science VM – Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=overview)
-Version: 22.04.01
+Version: `22.04.01`
Main changes:
Main changes:
## March 18, 2022 [Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
-Version: 22.03.09
+Version: `22.03.09`
Main changes:
Main changes:
[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
-Version: 21.12.03
+Version: `21.12.03`
Windows 2019 DSVM will now be supported under publisher: microsoft-dsvm, offer ID: dsvm-win-2019, plan ID/SKU ID: winserver-2019
Users using Azure Resource Manager (ARM) template / virtual machine scale set to
New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
-Version: 21.12.03
+Version: `21.12.03`
Main changes:
Main changes:
New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
-Version: 21.11.04
+Version: `21.11.04`
Main changes: * Changed .NET Framework to version 3.1.414
Main changes:
New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
-Version: 21.10.07
+Version: `21.10.07`
Main changes: - Changed pytorch to version 1.9.1
Main changes:
New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
-Version: 21.08.11
+Version: `21.08.11`
Main changes:
Main changes:
New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
-Version: 21.06.22
+Version: `21.06.22`
Main changes:
Main changes:
New image for [Ubuntu 18.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-1804?tab=Overview).
-Version: 21.06.01
+Version: `21.06.01`
Main changes are:
Removed several icons from desktop.
New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
-Version: 21.05.22
+Version: `21.05.22`
Selected version updates are: - CUDA 11.1
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
This guide assumes you don't have a managed identity, a storage account or an on
cd azureml-examples/cli ```
+## Limitations
+
+* The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
+ ## Define configuration YAML file for deployment To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md) document.
Then, upload file in container.
The following code creates an online endpoint without specifying a deployment.
+> [!WARNING]
+> The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
+ # [System-assigned managed identity](#tab/system-identity) When you create an online endpoint, a system-assigned managed identity is created for the endpoint by default.
->[!IMPORTANT]
-> System assigned managed identities are immutable and can't be changed once created.
- ::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="create_endpoint" ::: Check the status of the endpoint with the following.
marketplace Customer Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/customer-dashboard.md
Previously updated : 04/26/2022 Last updated : 05/05/2022 # Customers dashboard in commercial marketplace analytics
Submit feedback about the report/dashboard along with an optional screenshot.
### Month range
+A month range selection is at the top-right corner of each page.
+ :::image type="content" source="media/customer-dashboard/month-range.png" alt-text="Screenshot showing the duration filter menu option on the Insights screen of the Customers dashboard.":::
-A month range selection is at the top-right corner of each page. Customize the output of graphs by selecting a month range based on the last **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
+ Customize the output of graphs by selecting a month range based on the last **six** or **12** months, or by selecting a **custom** month range with a maximum duration of 12 months. The default month range is six months.
### Customer page dashboard filters
marketplace Customer Retention Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/customer-retention-dashboard.md
Previously updated : 04/28/2022 Last updated : 05/05/2022 # Customer retention dashboard
Provide your feedback in the dialog box that appears.
Select the **Category** for an offer from the **Category** list, and then select one of your offers from the **Marketplace offer** list. > [!NOTE] > The analysis is currently available at offer level, not at offer plan level.
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md
You can use the script to deploy the Azure Migrate appliance on an existing phys
### Download the script
-1. In **Migration Goals** > **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click **Discover**.
2. In **Discover server** > **Are your servers virtualized?**, select **Yes, with VMware vSphere hypervisor**. 3. Provide an appliance name and generate a project key in the portal. 3. Click **Download**, to download the zipped file.
Make sure that the appliance can connect to Azure URLs for [government clouds](m
### Download the script
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Yes, with Hyper-V**. 3. Provide an appliance name and generate a project key in the portal. 3. Click **Download**, to download the zipped file.
Make sure that the appliance can connect to Azure URLs for [government clouds](m
### Download the script
-1. In **Migration Goals** > **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, click **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen etc.)**. 3. Click **Download**, to download the zipped file.
migrate How To Set Up Appliance Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-hyper-v.md
To set up the appliance using a VHD template:
### Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Yes, with Hyper-V**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of servers on Hyper-V. The name should be alphanumeric with 14 characters or fewer. 1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
Click on **Start discovery**, to kick off server discovery from the successfully
After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard.
-2. In **Azure Migrate - Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+2. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
## Next steps
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md
To set up the appliance you:
### Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer. 1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
Click on **Start discovery**, to kick off discovery of the successfully validate
After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard.
-2. In **Azure Migrate - Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+2. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
## Next steps
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
To set up the appliance by using an OVA template, you'll complete these steps, w
#### Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Discover**.
1. In **Discover servers**, select **Are your servers virtualized?** > **Yes, with VMware vSphere hypervisor**. 1. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up to discover servers in your VMware environment. The name should be alphanumeric and 14 characters or fewer. 1. To start creating the required Azure resources, select **Generate key**. Don't close the **Discover** pane while the resources are being created.
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
To set up the appliance you:
### 1. Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer. 1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
Click on **Start discovery**, to kick off discovery of the successfully validate
After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard.
-2. In **Azure Migrate - Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
+2. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**.
## Next steps
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
To set up the appliance, you:
### 1. Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
-2. In **Discover servers** > **Are your servers virtualized?** > **Physical or other (AWS, GCP, Xen, etc.)**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of your GCP virtual servers. The name should be alphanumeric with 14 characters or fewer. 4. Click **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources. 5. After the successful creation of the Azure resources, a **project key** is generated.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
This tutorial sets up the appliance on a server running in Hyper-V environment,
### 1. Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
2. In **Discover Servers** > **Are your servers virtualized?**, select **Yes, with Hyper-V**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of servers. The name should be alphanumeric with 14 characters or fewer. 1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover server page during the creation of resources.
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
Download the CSV template and add server information to it.
### Download the template
-1. In **Migration Goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
+1. In **Migration goals** > **Servers** > **Azure Migrate: Server Assessment**, select **Discover**.
2. In **Discover machines**, select **Import using CSV**. 3. Select **Download** to download the CSV template. Alternatively, you can [download it directly](https://go.microsoft.com/fwlink/?linkid=2109031).
Asianux 3<br/>Asianux 4<br/>Asianux 5<br/>CentOS<br/>CentOS 4/5<br/>CoreOS Linux
## Assessment considerations -- If you import serves by using a CSV file and creating an assessment with sizing criteria as "performance-based":
+- If you import servers by using a CSV file and creating an assessment with sizing criteria as "performance-based":
- For Azure VM assessment, the performance values you specify (CPU utilization, Memory utilization, Disk IOPS and throughput) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information. - For Azure VMware Solution assessment, the performance values you specify (CPU utilization, Memory utilization, Storage in use(GB)) are used if you choose performance-based sizing. You will not be able to provide performance history and percentile information. - To get an accurate OS suitability/readiness in Azure VM and Azure VMware Solution assessment, please enter the Operating system version and architecture in the respective columns.
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
To set up the appliance, you:
### 1. Generate the project key
-1. In **Migration Goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**.
2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**. 3. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you will set up for discovery of physical or virtual servers. The name should be alphanumeric with 14 characters or fewer. 1. Click on **Generate key** to start the creation of the required Azure resources. Do not close the Discover servers page during the creation of resources.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
To set up the appliance by using an OVA template, you'll complete these steps, w
#### Generate the project key
-1. In **Migration Goals**, select **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Discover**.
+1. In **Migration goals**, select **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Discover**.
1. In **Discover servers**, select **Are your servers virtualized?** > **Yes, with VMware vSphere hypervisor**. 1. In **1:Generate project key**, provide a name for the Azure Migrate appliance that you'll set up to discover servers in your VMware environment. The name should be alphanumeric and 14 characters or fewer. 1. To start creating the required Azure resources, select **Generate key**. Don't close the **Discover** pane while the resources are being created.
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
This procedure describes how to set up the appliance with a downloaded Open Virt
Download the template as follows:
-1. In the Azure Migrate project, select **Servers** under **Migration Goals**.
+1. In the Azure Migrate project, select **Servers, databases and web apps** under **Migration goals**.
2. In **Azure Migrate - Servers** > **Azure Migrate: Server Migration**, click **Discover**. ![Discover VMs](./media/tutorial-migrate-vmware-agent/migrate-discover.png)
When delta replication begins, you can run a test migration for the VMs, before
Do a test migration as follows:
-1. In **Migration goals** > **Servers** > **Azure Migrate: Server Migration**, click **Test migrated servers**.
+1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Server Migration**, click **Test migrated servers**.
![Test migrated servers](./media/tutorial-migrate-vmware-agent/test-migrated-servers.png)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-
|Burstable (B1s)|1|1|134217728|33554432|134217728| |Burstable (B1ms)|1|2|536870912|134217728|536870912| |Burstable|2|4|2147483648|134217728|2147483648|
-|General Purpose|2|8|6442450944|134217728|6442450944|
+|General Purpose|2|8|5368709120|134217728|5368709120|
|General Purpose|4|16|12884901888|134217728|12884901888| |General Purpose|8|32|25769803776|134217728|25769803776| |General Purpose|16|64|51539607552|134217728|51539607552|
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[ANS Group UK](https://www.ans.co.uk/)|[Azure Managed Services + ANS Glass 10 week implementation](https://azuremarketplace.microsoft.com/marketplace/apps/ans_group.glasssaas?tab=Overview)|[ExpressRoute & connectivity: Two week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_er)|[Azure Virtual WAN + Fortinet: Two week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_vw)||| |[Aryaka Networks](https://www.aryaka.com/azure-msp-vwan-managed-service-provider-launch-partner-aryaka/)||[Aryaka Azure Connect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.cloudconnect_azure_19?tab=Overview)|[Aryaka Managed SD-WAN for Azure Networking Virtual](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.aryaka_azure_virtual_wan?tab=Overview) | | | |[AXESDN](https://www.axesdn.com/en/azure-msp.html)||[AXESDN Managed Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_expressroute?tab=Overview)|[AXESDN Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_virtualwan?tab=Overview) | | |
-|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-001?tab=Overview)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-003?tab=Overview)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-002?tab=Overview)|||
+|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/marketplace/consulting-services/btenterprise-hq.bt_caf_vwan_landingzone)|||
|[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)| |[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)||| |[Colt](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
Title: Configure custom DNS resources in an Azure Red Hat OpenShift (ARO) cluster description: Discover how to add a custom DNS server on all of your nodes in Azure Red Hat OpenShift (ARO).-+
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 03/21/2022 Last updated : 05/4/2022
One advantage of running your workload in Azure is global reach. The flexible se
| UK West | :heavy_check_mark: | :x: | :x: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :x: |
-| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| West US 2 | :heavy_check_mark: | :x: $ | :x: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :x: | $ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md
Title: Microsoft Purview private endpoints frequently asked questions (FAQ)
-description: This article answers frequently asked questions about Microsoft Purview private endpoints.
+ Title: Microsoft Purview private endpoints and managed vnets frequently asked questions (FAQ)
+description: This article answers frequently asked questions about Microsoft Purview private endpoints and managed vnets.
Previously updated : 05/11/2021
-# Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account for secure access.
Last updated : 05/05/2022
+# Customer intent: As a Microsoft Purview admin, I want to set up private endpoints and managed vnets for my Microsoft Purview account for secure access or ingestion.
-# FAQ about Microsoft Purview private endpoints
+# FAQ about Microsoft Purview private endpoints and Managed VNets
-This article answers common questions that customers and field teams often ask about Microsoft Purview network configurations by using [Azure Private Link](../private-link/private-link-overview.md). It's intended to clarify questions about Microsoft Purview firewall settings, private endpoints, DNS configuration, and related configurations.
+This article answers common questions that customers and field teams often ask about Microsoft Purview network configurations by using [Azure Private Link](../private-link/private-link-overview.md) or [Microsoft Purview Managed VNets](./catalog-managed-vnet.md). It's intended to clarify questions about Microsoft Purview firewall settings, private endpoints, DNS configuration, and related configurations.
To set up Microsoft Purview by using Private Link, see [Use private endpoints for your Microsoft Purview account](./catalog-private-link.md).
+To configure Managed VNets for a Microsoft Purview account, see [Use a Managed VNet with your Microsoft Purview account](./catalog-managed-vnet.md).
## Common questions Check out the answers to the following common questions.
+### When should I use a self-hosted integration runtime or a Managed IR?
+
+Use a Managed IR if:
+- Your Microsoft Purview account is deployed in one of the [supported regions for Managed VNets](catalog-managed-vnet.md#supported-regions).
+- You are planning to scan any of the [supported data sources](catalog-managed-vnet.md#supported-data-sources) by using Managed IR.
+
+Use a self-hosted integration runtime if:
+- You are planning to scan any Azure IaaS, SaaS, or on-premises data sources.
+- Managed VNet is not available in the region where your Microsoft Purview account is deployed.
+
+### Can I use both self-hosted integration runtime and Managed IR inside a Microsoft Purview account?
+
+Yes. You can use one or all of the runtime options in a single Microsoft Purview account: Azure IR, Managed IR, and self-hosted integration runtime. However, you can use only one runtime option in a single scan.
+ ### What's the purpose of deploying the Microsoft Purview account private endpoint? The Microsoft Purview account private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the account. This private endpoint is also a prerequisite for the portal private endpoint.
Microsoft Purview can scan data sources in Azure or an on-premises environment b
- **Queue** is linked to a Microsoft Purview managed storage account. - **namespace** is linked to a Microsoft Purview managed event hub namespace.
-### Can I scan data through a public endpoint if a private endpoint is enabled on my Microsoft Purview account?
+### Can I scan a data source through a public endpoint if a private endpoint is enabled on my Microsoft Purview account?
Yes. Data sources that aren't connected through a private endpoint can be scanned by using a public endpoint while Microsoft Purview is configured to use a private endpoint.
-### Can I scan data through a service endpoint if a private endpoint is enabled?
+### Can I scan a data source through a service endpoint if a private endpoint is enabled?
Yes. Data sources that aren't connected through a private endpoint can be scanned by using a service endpoint while Microsoft Purview is configured to use a private endpoint. Make sure you enable **Allow trusted Microsoft services** to access the resources inside the service endpoint configuration of the data source resource in Azure. For example, if you're going to scan Azure Blob Storage in which the firewalls and virtual networks settings are set to **selected networks**, make sure the **Allow trusted Microsoft services to access this storage account** checkbox is selected as an exception.
+### Can I scan a data source through a public endpoint using Managed IR?
+
+Yes, if the data source is supported by Managed VNet. As a prerequisite, you need to deploy a managed private endpoint for the data source.
+
+### Can I scan a data source through a service endpoint using Managed IR?
+
+Yes, if the data source is supported by Managed VNet. As a prerequisite, you need to deploy a managed private endpoint for the data source.
+ ### Can I access the Microsoft Purview governance portal from a public network if Public network access is set to Deny in Microsoft Purview account networking? No. Connecting to Microsoft Purview from a public endpoint where **Public network access** is set to **Deny** results in the following error message:
No. As protected resources, access to the Microsoft Purview managed storage acco
To read more about Azure deny assignment, see [Understand Azure deny assignments](../role-based-access-control/deny-assignments.md).
-### What are the supported authentication types when you use a private endpoint?
+### What are the supported authentication types when I use a private endpoint?
+
+It depends on the authentication types supported by the data source, such as SQL Authentication, Windows Authentication, Basic Authentication, or Service Principal, with credentials stored in Azure Key Vault. MSI can't be used.
-Azure Key Vault or Service Principal.
+### What are the supported authentication types when I use Managed IR?
+It depends on the authentication types supported by the data source, such as SQL Authentication, Windows Authentication, Basic Authentication, or Service Principal, with credentials stored in Azure Key Vault, or MSI.
### What private DNS zones are required for Microsoft Purview for a private endpoint?
You're also required to set up a [virtual network link](../dns/private-dns-virtu
No. You have to deploy and register a self-hosted integration runtime to scan data by using private connectivity. Azure Key Vault or Service Principal must be used as the authentication method to data sources.
+### Can I use Managed IR to scan data sources through a private endpoint?
+
+If you are planning to use Managed IR to scan any of the supported data sources, the data source requires a managed private endpoint created inside the Microsoft Purview Managed VNet. For more information, see [Microsoft Purview Managed VNets](./catalog-managed-vnet.md).
+ ### What are the outbound ports and firewall requirements for virtual machines with self-hosted integration runtime for Microsoft Purview when you use a private endpoint? The VMs in which self-hosted integration runtime is deployed must have outbound access to Azure endpoints and a Microsoft Purview private IP address through port 443.
The VMs in which self-hosted integration runtime is deployed must have outbound
No. However, it's expected that the virtual machine running self-hosted integration runtime can connect to your instance of Microsoft Purview through an internal IP address by using port 443. Use common troubleshooting tools for name resolution and connectivity testing, such as nslookup.exe and Test-NetConnection.
+### Do I still need to deploy private endpoints for my Microsoft Purview account if I am using Managed VNet?
+
+At least one account private endpoint and one portal private endpoint are required if public network access in the Microsoft Purview account is set to **deny**.
+At least one account, one portal, and one ingestion private endpoint are required if public network access is set to **deny** and you are planning to scan additional data sources by using a self-hosted integration runtime.
+
+### What inbound and outbound communications are allowed through public endpoint for Microsoft Purview Managed VNets?
+No inbound communication is allowed into a Managed VNet from the public network.
+All ports are open for outbound communication.
+ ### Why do I receive the following error message when I try to launch Microsoft Purview governance portal from my machine? "This Microsoft Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Microsoft Purview account's private endpoint."
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
To create and run a new scan, do the following:
1. Create a user account in Azure Active Directory tenant and assign the user to Azure Active Directory role, **Power BI Administrator**. Take note of username and login to change the password.
-3. Assign proper Power BI license to the user.
+1. Assign proper Power BI license to the user.
-2. Navigate to your Azure key vault.
+1. Navigate to your Azure key vault.
-3. Select **Settings** > **Secrets** and select **+ Generate/Import**.
+1. Select **Settings** > **Secrets** and select **+ Generate/Import**.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault.png" alt-text="Screenshot how to navigate to Azure Key Vault.":::
-4. Enter a name for the secret and for **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
+1. Enter a name for the secret and for **Value**, type the newly created password for the Azure AD user. Select **Create** to complete.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-key-vault-secret.png" alt-text="Screenshot how to generate an Azure Key Vault secret.":::
-5. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
-6. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of Client ID(App ID).
+1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**. Take note of the Client ID (App ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot how to create a Service principle.":::
-7. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
- Power BI Service Tenant.Read.All - Microsoft Graph openid
To create and run a new scan, do the following:
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
-8. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
+1. In the Microsoft Purview Studio, navigate to the **Data map** in the left menu.
-9. Navigate to **Sources**.
+1. Navigate to **Sources**.
-10. Select the registered Power BI source.
+1. Select the registered Power BI source.
-11. Select **+ New scan**.
+1. Select **+ New scan**.
-12. Give your scan a name. Then select the option to include or exclude the personal workspaces.
+1. Give your scan a name. Then select the option to include or exclude the personal workspaces.
>[!Note] > Switching the configuration of a scan to include or exclude a personal workspace will trigger a full scan of PowerBI source.
-13. Select your self-hosted integration runtime from the drop-down list.
+1. Select your self-hosted integration runtime from the drop-down list.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-shir.png" alt-text="Image showing Power BI scan setup using SHIR for same tenant.":::
-14. For the **Credential**, select **Delegated authentication** and click **+ New** to create a new credential.
+1. For the **Credential**, select **Delegated authentication** and click **+ New** to create a new credential.
-15. Create a new credential and provide required parameters:
+1. Create a new credential and provide required parameters:
- **Name**: Provide a unique name for credential - **Client ID**: Use Service Principal Client ID (App ID) you created earlier - **User name**: Provide the username of Power BI Administrator you created earlier - **Password**: Select the appropriate Key vault connection and the **Secret name** where the Power BI account password was saved earlier.
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-delegated-authentication.png" alt-text="Screenshot of the new credential menu, showing Power B I credential with all required values supplied.":::
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-delegated-authentication.png" alt-text="Image showing Power BI scan setup using Delegated authentication.":::
-
-16. Select **Test Connection** before continuing to next steps. If **Test Connection** failed, select **View Report** to see the detailed status and troubleshoot the problem
+1. Select **Test Connection** before continuing to the next steps. If **Test Connection** fails, select **View Report** to see the detailed status and troubleshoot the problem.
1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication required. 2. Assets (+ lineage) - Failed status means the Microsoft Purview - Power BI authorization has failed. Make sure the Microsoft Purview managed identity is added to the security group associated in Power BI admin portal. 3. Detailed metadata (Enhanced) - Failed status means the Power BI admin portal is disabled for the following setting - **Enhance admin APIs responses with detailed metadata** :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
-17. Set up a scan trigger. Your options are **Recurring**, and **Once**.
+1. Set up a scan trigger. Your options are **Recurring**, and **Once**.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Microsoft Purview scan scheduler.":::
-18. On **Review new scan**, select **Save and run** to launch your scan.
+1. On **Review new scan**, select **Save and run** to launch your scan.
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/save-run-power-bi-scan.png" alt-text="Screenshot of Save and run Power BI source.":::
sentinel Authentication Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/authentication-normalization-schema.md
For more information about normalization in Microsoft Sentinel, see [Normalizati
## Parsers
-Deploy ASIM parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). For more information about ASIM parsers, see the articles [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
+Deploy ASIM authentication parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). For more information about ASIM parsers, see the article [ASIM parsers overview](normalization-parsers-overview.md).
### Unifying parsers
-To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `ImAuthentication` filtering parser or the `ASimAuthentication` parameter-less parser.
+To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `imAuthentication` filtering parser or the `ASimAuthentication` parameter-less parser.
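For example, a minimal sketch of a query over the unifying parser might look like the following. It assumes your sources are already onboarded and normalized, and that the `EventResult` and `TargetUsername` fields are populated; adjust the names and time range to your environment.

```kusto
// Count failed sign-ins per target user over the last day,
// across all sources covered by the unifying filtering parser.
imAuthentication (starttime = ago(1d), endtime = now())
| where EventResult == "Failure"
| summarize FailedLogons = count() by TargetUsername
| sort by FailedLogons desc
```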
-## Source-specific parsers
+### Source-specific parsers
-Microsoft Sentinel provides the following built-in, product-specific authentication event parsers:
--- **Windows sign-ins**
- - Collected using the Log Analytics Agent or Azure Monitor Agent.
- - Collected using either the Security Events connectors to the SecurityEvent table or using the WEF connector to the WindowsEvent table.
- - Reported as Security Events (4624, 4625, 4634, and 4647).
-- **Windows sign-ins** reported by Microsoft 365 Defender for Endpoint, collected using the Microsoft 365 Defender connector.-- **Linux Sign-ins** reported by Microsoft Defender to IoT Endpoint.-- **Azure Active Directory sign-ins**, collected using the Azure Active Directory connector. Separate parsers are provided for regular, Non-Interactive, Managed Identities and Service Principles Sign-ins.-- **AWS sign-ins**, collected using the AWS CloudTrail connector.-- **Okta authentication**, collected using the Okta connector.
+For the list of authentication parsers Microsoft Sentinel provides, refer to the [ASIM parsers list](normalization-parsers-list.md#authentication-parsers).
### Add your own normalized parsers
When implementing custom parsers for the Authentication information model, name
- `vimAuthentication<vendor><Product>` for filtering parsers - `ASiAuthentication<vendor><Product>` for parameter-less parsers
-For information on adding the your custom parsers to the unifying parser, refer to [Managing ASIM parsers](normalization-manage-parsers.md).
+For information on adding your custom parsers to the unifying parser, refer to [Managing ASIM parsers](normalization-manage-parsers.md).
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimized-parsers). While these parsers are optional, they can improve your query performance.
+The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
The following filtering parameters are available:
The following filtering parameters are available:
|-|--|-| | **starttime** | datetime | Filter only authentication events that ran at or after this time. | | **endtime** | datetime | Filter only authentication events that finished running at or before this time. |
-| **targetusername_has** | string | Filter only authentication events that has any of the listed user names. |
+| **targetusername_has** | string | Filter only authentication events that have any of the listed user names. |
For example, to filter only authentication events from the last day to a specific user, use:
imAuthentication (targetusername_has = 'johndoe', starttime = ago(1d), endtime=n
```
+> [!TIP]
+> To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals). For example: `dynamic(['192.168.','10.'])`.
+>
++ ## Normalized content Normalized authentication analytic rules are unique as they detect attacks across sources. So, for example, if a user logged in to different, unrelated systems, from different countries, Microsoft Sentinel will now detect this threat.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Field | Class | Type | Description | ||--||--|
-| <a name="dst"></a>**Dst** | Recommended | String | A unique identifier of the authetication target. <br><br>This field may alias the [TargerDvcId](#targetdvcid), [TargetHostname](#targethostname), [TargetIpAddr](#targetipaddr), [TargetAppId](#targetappid), or [TargetAppName](#targetappname) fields. <br><br>Example: `192.168.12.1` |
+| <a name="dst"></a>**Dst** | Recommended | String | A unique identifier of the authentication target. <br><br>This field may alias the [TargerDvcId](#targetdvcid), [TargetHostname](#targethostname), [TargetIpAddr](#targetipaddr), [TargetAppId](#targetappid), or [TargetAppName](#targetappname) fields. <br><br>Example: `192.168.12.1` |
| <a name="targetappid"></a>**TargetAppId** |Optional | String| The ID of the application to which the authorization is required, often assigned by the reporting device. <br><br>Example: `89162` | |<a name="targetappname"></a>**TargetAppName** |Optional |String |The name of the application to which the authorization is required, including a service, a URL, or a SaaS application. <br><br>Example: `Saleforce` | | **TargetAppType**|Optional |AppType |The type of the application authorizing on behalf of the Actor. For more information, and allowed list of values, see [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).|
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
For more information, see:
- [Discover and deploy Microsoft Sentinel solutions (Public preview)](sentinel-solutions-deploy.md) - [Microsoft Sentinel data connectors](connect-data-sources.md)-- [Advanced Security Information Model (ASIM) parsers (Public preview)](normalization-about-parsers.md)
+- [Advanced Security Information Model (ASIM) parsers (Public preview)](normalization-parsers-overview.md)
- [Visualize collected data](get-visibility.md) - [Create custom analytics rules to detect threats](detect-threats-custom.md) - [Hunt for threats with Microsoft Sentinel](hunting.md)
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
If the instructions on your data connector's page in Microsoft Sentinel indicate
Use the link in the data connector page to deploy your parsers, or follow the instructions from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/ASIM).
-For more information, see [Advanced Security Information Model (ASIM) parsers](normalization-about-parsers.md).
+For more information, see [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md).
## Configure the Log Analytics agent
sentinel Dhcp Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dhcp-normalization-schema.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
_Im_DNS | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
## Parsers
-For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
+For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
### Unifying parsers
-To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `_Im_Dns` filtering parser or the `_ASim_Dns` parameter-less parser. You can also use workspace deployed `ImDns` and `ASimDns` parsers. Deploy workspace parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). For more information, see [built-in ASIM parsers and workspace-deployed parsers](normalization-parsers-overview.md#built-in-asim-parsers-and-workspace-deployed-parsers).
+To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `_Im_Dns` filtering parser or the `_ASim_Dns` parameter-less parser. You can also use workspace deployed `ImDns` and `ASimDns` parsers.
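For example, a minimal sketch of a query over the unifying filtering parser, assuming the `starttime` and `endtime` filtering parameters and the normalized `EventSubType` and `SrcIpAddr` fields are available for your sources:

```kusto
// Top source addresses generating DNS responses over the last day,
// queried through the built-in unifying filtering parser.
_Im_Dns (starttime = ago(1d), endtime = now())
| where EventSubType == "response"
| summarize Queries = count() by SrcIpAddr
| top 10 by Queries
```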
### Out-of-the-box, source-specific parsers
-Microsoft Sentinel provides the following out-of-the-box, product-specific DNS parsers:
-
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
-| | | |
-|**Microsoft DNS Server**<br>Collected by the DNS connector<br> and the Log Analytics Agent | `_ASim_Dns_MicrosoftOMS` (regular) <br> `_Im_Dns_MicrosoftOMS` (filtering) <br><br> | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) <br><br> |
-| **Microsoft DNS Server**<br>Collected by NXlog| `_ASim_Dns_MicrosoftNXlog` (regular)<br>`_Im_Dns_MicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
-| **Azure Firewall** | `_ASim_Dns_AzureFirewall` (regular)<br> `_Im_Dns_AzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
-| **Sysmon for Windows** (event 22)<br> Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_Dns_MicrosoftSysmon` (regular)<br> `_Im_Dns_MicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
-| **Cisco Umbrella** | `_ASim_Dns_CiscoUmbrella` (regular)<br> `_Im_Dns_CiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
-| **Infoblox NIOS**<br><br>The InfoBlox parsers<br>require [configuring the relevant sources](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser).<br> Use `InfobloxNIOS` as the source type. | `_ASim_Dns_InfobloxNIOS` (regular)<br> `_Im_Dns_InfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
-| **GCP DNS** | `_ASim_Dns_Gcp` (regular)<br> `_Im_Dns_Gcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
-| **Corelight Zeek DNS events** | `_ASim_Dns_CorelightZeek` (regular)<br> `_Im_Dns_CorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
-| **Vectra AI** |`_ASim_Dns_VectraIA` (regular)<br> `_Im_Dns_VectraIA` (filtering) | `AsimDnsVectraAI` (regular)<br> `vimDnsVectraAI` (filtering) |
-| **Zscaler ZIA** |`_ASim_Dns_ZscalerZIA` (regular)<br> `_Im_Dns_ZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsSzcalerZIA` (filtering) |
-||||
-
-These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
+For the list of DNS parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#dns-parsers).
### Add your own normalized parsers
When implementing custom parsers for the Dns information model, name your KQL fu
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimized-parsers). While these parsers are optional, they can improve your query performance.
+The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
The following filtering parameters are available:
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
### DNS-specific fields
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/file-event-normalization-schema.md
For more information about normalization in Microsoft Sentinel, see [Normalizati
## Parsers
-Microsoft Sentinel provides the following built-in, product-specific file event parsers:
--- **Sysmon file activity events** (Events 11, 23, and 26), collected using the Log Analytics Agent or Azure Monitor Agent.-- **Microsoft Office 365 SharePoint and OneDrive events**, collected using the Office Activity connector.-- **Microsoft 365 Defender for Endpoint file events**-- **Azure Storage**, including Blob, File, Queue, and Table Storage.
+For the list of file activity parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#file-activity-parsers).
To use the unifying parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use imFileEvent as the table name in your query.
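For example, a minimal sketch of a query against the unifying parser, assuming file events have already been onboarded and normalized (the `EventType` field is part of the common ASIM fields):

```kusto
// Count recent file activity events per normalized event type,
// queried through the imFileEvent unifying parser.
imFileEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by EventType
| sort by Events desc
```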
-Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelFileEvent).
- ## Add your own normalized parsers - When implementing custom parsers for the File Event information model, name your KQL functions using the following syntax: `imFileEvent<vendor><Product`. Add your KQL function to the `imFileEvent` unifying parser to ensure that any content using the File Event model also uses your new parser.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
This article describes version 0.2.x of the network normalization schema. [Versi
## Parsers
-For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
+For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
### Unifying parsers
For more information, see [built-in ASIM parsers and workspace-deployed parsers]
### Out-of-the-box, source-specific parsers
-Microsoft Sentinel provides the following out-of-the-box, product-specific Network Session parsers:
-
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
-| | | |
-| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) |
-| **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) |
-| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
-| **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
-| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
-| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
-| **Palo Alto PanOS traffic logs** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
-| **Sysmon for Linux** (event 3)<br> Collected using the Log Analytics Agent<br> or the Azure Monitor Agent |`_ASim_NetworkSession_LinuxSysmon` (regular)<br><br>`_Im_NetworkSession_LinuxSysmon` (filtering) | `ASimNetworkSessionLinuxSysmon` (regular)<br><br> `vimNetworkSessionLinuxSysmon` (filtering) |
-| **Windows Firewall logs**<br>Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. |`_ASim_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br>`_Im_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (filtering) | `ASimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br> `vimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (filtering) |
-| **Zscaler ZIA firewall logs** |`_ASim_NetworkSessionZscalerZIA` (regular)<br> `_Im_NetworkSessionZscalerZIA` (filtering) | `AsimNetworkSessionZscalerZIA` (regular)<br> `vimNetowrkSessionSzcalerZIA` (filtering) |
-
+For the list of Network Session parsers that Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#network-session-parsers).
### Add your own normalized parsers
Refer to the article [Managing ASIM parsers](normalization-manage-parsers.md) to
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimized-parsers). While these parsers are optional, they can improve your query performance.
+The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parsers are optional, they can improve your query performance.
The following filtering parameters are available:
The following filtering parameters are available:
| **endtime** | datetime | Filter only network sessions that *started* running at or before this time. | | **srcipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.| | **dstipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
+| **ipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](#dstipaddr) or [source IP address field](#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.<br><br>The field [ASimMatchingIpAddr](normalization-common-fields.md#asimmatchingipaddr) is set to one of the values `SrcIpAddr`, `DstIpAddr`, or `Both` to reflect the matching field or fields. |
| **dstportnum** | Int | Filter only network sessions with the specified destination port number. |
-| **hostname_has_any** | dynamic | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. |
+| **hostname_has_any** | dynamic | Filter only network sessions for which the source or [destination hostname field](#dsthostname) has any of the values listed. The length of the list is limited to 10,000 items.<br><br> The field [ASimMatchingHostname](normalization-common-fields.md#asimmatchinghostname) is set to one of the values `SrcHostname`, `DstHostname`, or `Both` to reflect the matching field or fields. |
| **dvcaction** | dynamic | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. | | **eventresult** | String | Filter only network sessions with a specific **EventResult** value. |
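To illustrate how these parameters combine, the following is a hedged sketch (the prefix list, port, and aggregation are illustrative values, not from the source article) that invokes the built-in `_Im_NetworkSession` parser with a time range, an IP prefix list, and a destination port:

```kusto
_Im_NetworkSession(
    starttime = ago(1d),
    endtime = now(),
    ipaddr_has_any_prefix = dynamic(['10.0.', '192.168.']),
    dstportnum = 443
  )
| summarize SessionCount = count() by SrcIpAddr, DstIpAddr
```

Because the prefix list can match either end of the session, the resulting records also carry the `ASimMatchingIpAddr` field described in the table above.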
The following list mentions fields that have specific guidelines for Network Ses
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` | | **EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. | | **EventSchema** | Mandatory | String | The name of the schema documented here is `NetworkSession`. |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.2`. |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.3`. |
| <a name="dvcaction"></a>**DvcAction** | Optional | Enumerated | The action taken on the network session. Supported values are:<br>- `Allow`<br>- `Deny`<br>- `Drop`<br>- `Drop ICMP`<br>- `Reset`<br>- `Reset Source`<br>- `Reset Destination`<br>- `Encrypt`<br>- `Decrypt`<br>- `VPNroute`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. The original value should be stored in the [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction) field.<br><br>Example: `drop` | | **EventSeverity** | Optional | Enumerated | If the source device does not provide an event severity, **EventSeverity** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventSeverity** should be `Low`. Otherwise, **EventSeverity** should be `Informational`. | | **DvcInterface** | | | The DvcInterface field should alias either the [DvcInboundInterface](#dvcinboundinterface) or the [DvcOutboundInterface](#dvcoutboundinterface) fields. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
### Network session fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcusername"></a>**SrcUsername** | Optional | String | The source username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other username formats are available, store them in the fields `SrcUsername<UsernameType>`.<br><br>Example: `AlbertE` | | <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` | | **SrcUserType** | Optional | UserType | The type of source user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. |
-| <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original destination user type, if provided by the reporting decice. |
+| <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original source user type, if provided by the reporting device. |
### Source application fields
These are the changes in version 0.2.2 of the schema:
- Added the fields `NetworkProtocolVersion`, `SrcSubscriptionId`, and `DstSubscriptionId`. - Deprecated `DstUserDomain` and `SrcUserDomain`.
+These are the changes in version 0.2.3 of the schema:
+- Added the `ipaddr_has_any_prefix` filtering parameter.
+- The `hostname_has_any` filtering parameter now matches either source or destination hostnames.
+- Added the fields `ASimMatchingHostname` and `ASimMatchingIpAddr`.
+ ## Next steps For more information, see:
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
Last updated 11/09/2021
-# Use Advanced Security Information Model (ASIM) parsers (Public preview)
+# Using the Advanced Security Information Model (ASIM) (Public preview)
[!INCLUDE [Banner for top of topics](./includes/banner.md)] Use Advanced Security Information Model (ASIM) parsers instead of table names in your Microsoft Sentinel queries to view data in a normalized format and to include all data relevant to the schema in your query. Refer to the table below to find the relevant parser for each schema.
-To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
- > [!IMPORTANT] > ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
When using ASIM in your queries, use **unifying parsers** to combine all sources
For example, the following query uses the built-in unifying DNS parser to query DNS events using the `ResponseCodeName`, `SrcIpAddr`, and `TimeGenerated` normalized fields:
+```kusto
+_Im_Dns(starttime=ago(1d), responsecodename='NXDOMAIN')
+ | summarize count() by SrcIpAddr, bin(TimeGenerated,15m)
+```
+
+The example uses [filtering parameters](#optimizing-parsing-using-parameters), which improve ASIM performance. The same example without filtering parameters would look like this:
+ ```kusto _Im_Dns
- | where isnotempty(ResponseCodeName)
+ | where TimeGenerated > ago(1d)
| where ResponseCodeName =~ "NXDOMAIN" | summarize count() by SrcIpAddr, bin(TimeGenerated,15m) ``` > [!NOTE]
-> When using the ASIM unifying filtering parsers in the **Logs** page, the time range selector is set to `custom`. You can still set the time range yourself. Alternatively, specify the time range using parser parameters.
->
-> Alternately, use the parameter-less parsers, which start with `_ASim_` for built-in parsers and `ASim` for workspace deployed parsers. Those parsers do not set the time-range picker to `custom` by default.
+> When using the ASIM parsers in the **Logs** page, the time range selector is set to `custom`. You can still set the time range yourself. Alternatively, specify the time range using parser parameters.
> The following table lists unifying parsers available:
-| Schema | Built-in filtering parser | Built-in parameter-less parser | workspace deployed filtering parser | workspace deployed parameter-less parser |
-| | - | | -- | |
-| Authentication | | | imAuthentication | ASimAuthentication |
-| Dns | _Im_Dns | _ASim_Dns | imDns | ASimDns |
-| File Event | | | | imFileEvent |
-| Network Session | _Im_NetworkSession | _ASim_NetworkSession | imNetworkSession | ASimNetworkSession |
-| Process Event | | | | - imProcess<br> - imProcessCreate<br> - imProcessTerminate |
-| Registry Event | | | | imRegistry |
-| Web Session | _Im_WebSession | _ASim_WebSession | imWebSession | ASimWebSession |
--
+| Schema | Unifying parser |
+| | - |
+| Authentication | imAuthentication |
+| Dns | _Im_Dns |
+| File Event | imFileEvent |
+| Network Session | _Im_NetworkSession |
+| Process Event | - imProcessCreate<br> - imProcessTerminate |
+| Registry Event | imRegistry |
+| Web Session | _Im_WebSession |
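As with the DNS example above, any unifying parser can be queried directly using the normalized fields. A minimal sketch using the Web Session unifying parser (the time range and aggregation are illustrative assumptions) might look like this:

```kusto
_Im_WebSession(starttime=ago(1h))
| summarize count() by EventResult
```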
-## Source-specific parsers
-Unifying parsers use Source-specific parsers to handle the unique aspects of each source. However, source-specific parsers can also be used independently. For example, in an Infoblox-specific workbook, use the `vimDnsInfobloxNIOS` source-specific parser.
-
-## <a name="optimized-parsers"></a>Optimizing parsing using parameters
+## Optimizing parsing using parameters
Using parsers may impact your query performance, primarily from filtering the results after parsing. For this reason, many parsers have optional filtering parameters, which enable you to filter before parsing and enhance query performance. With query optimization and pre-filtering efforts, ASIM parsers often provide better performance when compared to not using normalization at all.
-When invoking the parser, use filtering parameters by adding one or more named parameters. For example, the following query start ensures that only DNS queries for non-existent domains are returned:
-
-```kusto
-_Im_Dns(responsecodename='NXDOMAIN')
-```
-
-The previous example is similar to the following query but is much more efficient.
-
-```kusto
-_Im_Dns | where ResponseCodeName == 'NXDOMAIN'
-```
+When invoking a parser, always use the available filtering parameters by adding one or more named parameters to ensure optimal performance of the ASIM parsers.
Each schema has a standard set of filtering parameters documented in the relevant schema documentation. Filtering parameters are entirely optional. The following schemas support filtering parameters: - [Authentication](authentication-normalization-schema.md)
Each schema has a standard set of filtering parameters documented in the relevan
- [Network Session](network-normalization-schema.md#filtering-parser-parameters) - [Web Session](web-normalization-schema.md#filtering-parser-parameters)
+Every schema that supports filtering parameters supports at least the `starttime` and `endtime` parameters, and using them is often critical for optimizing performance.
-## <a name="next-steps"></a>Next steps
+For an example of using filtering parameters, see [Unifying parsers](#unifying-parsers) above.
-This article discusses the Advanced Security Information Model (ASIM) parsers. To learn how to develop your own parsers, see [Develop ASIM parsers](normalization-develop-parsers.md).
+## Next steps
Learn more about ASIM parsers:
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
Some fields are common to all ASIM schemas. Each schema might add guidelines for
## Standard Log Analytics fields
-The following fields are generated by Log Analytics for each record. They can be overridden when you [create a custom connector](create-custom-connector.md).
+In most cases, the following fields are generated by Log Analytics for each record. They can be overridden when you [create a custom connector](create-custom-connector.md).
| Field | Type | Discussion | | - | -- | -- |
The following fields are generated by Log Analytics for each record. They can be
The following fields are defined by ASIM for all schemas:
+### Event fields
+ | Field | Class | Type | Description | ||-||--| | <a name="eventmessage"></a>**EventMessage** | Optional | String | A general message or description, either included in or generated from the record. |
The following fields are defined by ASIM for all schemas:
| <a name="eventschema"></a>**EventSchema** | Mandatory | String | The schema the event is normalized to. Each schema documents its schema name. | | <a name="eventschemaversion"></a>**EventSchemaVersion** | Mandatory | String | The version of the schema. Each schema documents its current version. | | <a name="eventreporturl"></a>**EventReportUrl** | Optional | String | A URL provided in the event for a resource that provides more information about the event.|+
+### Device fields
+
+The role of the device fields differs between schemas and event types. For example, for the Network Session schema, the device fields provide information about the device that generated the event, while for the Process Event schema, they provide information about the device on which the process runs. Each schema document specifies the role of the device for that schema.
+
+| Field | Class | Type | Description |
+||-||--|
| <a name="dvc"></a>**Dvc** | Mandatory | String | A unique identifier of the device on which the event occurred or which reported the event, depending on the schema. <br><br>This field might alias the [DvcFQDN](#dvcfqdn), [DvcId](#dvcid), [DvcHostname](#dvchostname), or [DvcIpAddr](#dvcipaddr) fields. For cloud sources, for which there is no apparent device, use the same value as the [Event Product](#eventproduct) field. | | <a name ="dvcipaddr"></a>**DvcIpAddr** | Recommended | IP address | The IP address of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `45.21.42.12` | | <a name ="dvchostname"></a>**DvcHostname** | Recommended | Hostname | The hostname of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `ContosoDc` | | <a name="dvcdomain"></a>**DvcDomain** | Recommended | String | The domain of the device on which the event occurred or which reported the event, depending on the schema.<br><br>Example: `Contoso` | | <a name="dvcdomaintype"></a>**DvcDomainType** | Recommended | Enumerated | The type of [DvcDomain](#dvcdomain). For a list of allowed values and further information refer to [DomainType](normalization-about-schemas.md#domaintype).<br><br>**Note**: This field is required if the [DvcDomain](#dvcdomain) field is used. | | <a name="dvcfqdn"></a>**DvcFQDN** | Optional | String | The hostname of the device on which the event occurred or which reported the event, depending on the schema. <br><br> Example: `Contoso\DESKTOP-1282V4D`<br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DvcDomainType](#dvcdomaintype) field reflects the format used. |
+| <a name = "dvcdescription"></a>**DvcDescription** | Optional | String | A descriptive text associated with the device. For example: `Primary Domain Controller`. |
| <a name ="dvcid"></a>**DvcId** | Optional | String | The unique ID of the device on which the event occurred or which reported the event, depending on the schema. <br><br>Example: `41502da5-21b7-48ec-81c9-baeea8d7d669` | | <a name="dvcidtype"></a>**DvcIdType** | Optional | Enumerated | The type of [DvcId](#dvcid). For a list of allowed values and further information refer to [DvcIdType](#dvcidtype).<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list, and store the others by using the field names **DvcAzureResourceId** and **DvcMDEid**, respectively.<br><br>**Note**: This field is required if the [DvcId](#dvcid) field is used. | | <a name="dvcmacaddr"></a>**DvcMacAddr** | Optional | MAC | The MAC address of the device on which the event occurred or which reported the event. <br><br>Example: `00:1B:44:11:3A:B7` |
The following fields are defined by ASIM for all schemas:
| <a name="dvcoriginalaction"></a>**DvcOriginalAction** | Optional | String | The original [DvcAction](#dvcaction) as provided by the reporting device. | | <a name="dvcinterface"></a>**DvcInterface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity which is captured by an intermediate or tap device. | | <a name="dvcsubscription"></a>**DvcSubscriptionId** | Optional | String | The cloud platform subscription ID the device belongs to. **DvcSubscriptionId** map to a subscription ID on Azure and to an account ID on AWS. |
-| <a name="additionalfields"></a>**AdditionalFields** | Optional | Dynamic | If your source provides additional information worth preserving, either keep it with the original field names or create the dynamic **AdditionalFields** field, and add to it the extra information as key/value pairs. |
+### Other fields
+
+| Field | Class | Type | Description |
+||-||--|
+| <a name="additionalfields"></a>**AdditionalFields** | Optional | Dynamic | If your source provides additional information worth preserving, either keep it with the original field names or create the dynamic **AdditionalFields** field, and add to it the extra information as key/value pairs. |
+| <a name="asimmatchingipaddr"></a>**ASimMatchingIpAddr** | Recommended | String | When a parser uses the `ipaddr_has_any_prefix` filtering parameter, this field is set to one of the values `SrcIpAddr`, `DstIpAddr`, or `Both` to reflect the matching field or fields. |
+| <a name="asimmatchinghostname"></a>**ASimMatchingHostname** | Recommended | String | When a parser uses the `hostname_has_any` filtering parameter, this field is set to one of the values `SrcHostname`, `DstHostname`, or `Both` to reflect the matching field or fields. |
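As a hedged sketch of how these fields can be used (assuming the Network Session unifying parser and the `hostname_has_any` parameter documented for that schema; the hostname value is illustrative), a query can check which side of the session matched:

```kusto
_Im_NetworkSession(starttime=ago(1d), hostname_has_any=dynamic(['contoso']))
| where ASimMatchingHostname in ('DstHostname', 'Both')
| summarize count() by DstHostname
```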
## Vendors and products
The currently supported list of vendors and products used in the [EventVendor](#
| Microsoft | - AAD<br> - Azure Firewall<br> - Azure File Storage<br> - Azure NSG flows<br> - DNS Server<br> - Microsoft 365 Defender for Endpoint<br> - Microsoft Defender for IoT<br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br> | Okta | - Okta<BR> - Auth0<br> | | Palo Alto | - PanOS<br> - CDL<br> |
+| PostgreSQL | PostgreSQL |
| Squid | Squid Proxy | | Vectra AI | Vectra Steam | | WatchGuard | Fireware |
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
For more information, see:
- [Advanced Security Information Model (ASIM) overview](normalization.md) - [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md) - [Advanced Security Information Model (ASIM) parsers](normalization-about-parsers.md)
+- [Using the Advanced Security Information Model (ASIM)](normalization-about-parsers.md)
- [Modifying Microsoft Sentinel content to use the Advanced Security Information Model (ASIM) parsers](normalization-modify-content.md)
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
To understand how parsers fit within the ASIM architecture, refer to the [ASIM a
## Custom parser development process
-The following workflow describe the high level steps in developing a custom ASIM, source-specific parser:
+The following workflow describes the high-level steps in developing a custom, source-specific ASIM parser:
+
+1. [Collect sample logs](#collect-sample-logs).
1. Identify the schemas or schemas that the events sent from the source represent. For more information, see [Schema overview](normalization-about-schemas.md).
-1. [Develop](#developing-parsers) one or more ASIM parsers for your source. You'll need to develop a parser for each schema relevant to the source.
+1. [Map](#mapping) the source event fields to the identified schema or schemas.
+
+1. [Develop](#developing-parsers) one or more ASIM parsers for your source. You'll need to develop a filtering parser and a parameter-less parser for each schema relevant to the source.
1. [Test](#test-parsers) your parser.
The following workflow describe the high level steps in developing a custom ASIM
1. Update the relevant ASIM unifying parser to reference the new custom parser. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
+1. You might also want to [contribute your parsers](#contribute-parsers) to the primary ASIM distribution. Contributed parsers may also be made available in all workspaces as built-in parsers.
+ This article guides you through the process's development, testing, and deployment steps. > [!TIP] > Also watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the related [slide deck](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM). For more information, see [Next steps](#next-steps). >
+### Collect sample logs
-## Developing parsers
-
-A custom parser is a KQL query developed in the Microsoft Sentinel **Logs** page. The parser query has three parts:
-
-**Filter** > **Parse** > **Prepare fields**
+To build effective ASIM parsers, you need a representative set of logs, which in most cases requires setting up the source system and connecting it to Microsoft Sentinel. If you do not have the source device available, cloud pay-as-you-go services let you deploy many devices for development and testing.
-### Prerequisites
+In addition, finding the vendor documentation and samples for the logs can help accelerate development and reduce mistakes by ensuring broad log format coverage.
-To develop a custom ASIM parser, you must have access to a workspace that stores relevant events.
+A representative set of logs should include:
+- Events with different event results.
+- Events with different response actions.
+- Different formats for username, hostname and IDs, and other fields that require value normalization.
> [!TIP] > Start a new custom parser using an existing parser for the same schema. Using an existing parser is especially important for filtering parsers to make sure they accept all the parameters required by the schema. >
+## Mapping
+
+Before you develop a parser, map the information available in the source event or events to the schema you identified:
+
+- Map all mandatory fields and preferably also recommended fields.
+- Try to map any information available from the source to normalized fields. If not available as part of the selected schema, consider mapping to fields available in other schemas.
+- Map values for fields at the source to the normalized values allowed by ASIM. The original value is stored in a separate field, such as `EventOriginalResultDetails`.
++
+## Developing parsers
+
+Develop both a filtering and a parameter-less parser for each relevant schema.
+
+A custom parser is a KQL query developed in the Microsoft Sentinel **Logs** page. The parser query has three parts:
+
+**Filter** > **Parse** > **Prepare fields**
++ ### Filtering #### Filtering the relevant records
Filtering in KQL is done using the `where` operator. For example, **Sysmon event
Event | where Source == "Microsoft-Windows-Sysmon" and EventID == 1 ```
+> [!IMPORTANT]
+> A parser should not filter by time. The query that uses the parser applies a time range.
+ #### Filtering by source type using a Watchlist In some cases, the event itself does not contain information that would allow filtering for specific source types.
To use this sample in your parser:
#### Filtering based on parser parameters
-When developing [filtering parsers](normalization-about-parsers.md#optimized-parsers), make sure that your parser accepts the filtering parameters for the relevant schema, as documented in the reference article for that schema. Using an existing parser as a starting point ensures that your parser includes the correct function signature. In most cases, the actual filtering code is also similar for filtering parsers for the same schema.
+When developing [filtering parsers](normalization-about-parsers.md#optimizing-parsing-using-parameters), make sure that your parser accepts the filtering parameters for the relevant schema, as documented in the reference article for that schema. Using an existing parser as a starting point ensures that your parser includes the correct function signature. In most cases, the actual filtering code is also similar for filtering parsers for the same schema.
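The following fragment is a hedged sketch, not a complete parser, of how a filtering parser signature typically accepts `starttime` and `endtime` and applies them only when the caller supplies them; the parser name is hypothetical and real parsers accept the full parameter list documented for their schema:

```kusto
let SampleDnsFilteringParser = (
    starttime: datetime = datetime(null),
    endtime: datetime = datetime(null)
  ) {
  Event
  // Filter to the relevant source type first, using native fields
  | where Source == "Microsoft-Windows-Sysmon" and EventID == 22
  // Apply the time parameters only when the caller provides them
  | where (isnull(starttime) or TimeGenerated >= starttime)
      and (isnull(endtime) or TimeGenerated <= endtime)
  // ... parse and prepare fields here
};
SampleDnsFilteringParser(ago(1h), now())
```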
When filtering, make sure that you:
In addition to parsing string, the parsing phase may require more processing of
> Even so, the query is still much more efficient than using `case` for all values. >
+### Mapping values
+
+In many cases, the original value extracted needs to be normalized. For example, in ASIM a MAC address uses colons as separators, while the source may send a hyphen-delimited MAC address. The primary operator for transforming values is `extend`, alongside a broad set of KQL string, numerical, and date functions, as demonstrated in the [Parsing](#parsing) section above.
+
+Use `case`, `iff`, and `lookup` statements when there is a need to map a set of values to the values allowed by the target field.
+
+When each source value maps to a target value, define the mapping using the `datatable` operator and use `lookup` to apply it. For example:
+
+```KQL
+ let NetworkProtocolLookup = datatable(Proto:real, NetworkProtocol:string)[
+ 6, 'TCP',
+ 17, 'UDP'
+ ];
+ let DnsResponseCodeLookup=datatable(DnsResponseCode:int,DnsResponseCodeName:string)[
+ 0,'NOERROR',
+ 1,'FORMERR',
+ 2,'SERVFAIL',
+ 3,'NXDOMAIN',
+ ...
+ ];
+ ...
+ | lookup DnsResponseCodeLookup on DnsResponseCode
+ | lookup NetworkProtocolLookup on Proto
+```
+
+Notice that `lookup` is useful and efficient even when the mapping has only two possible values.
+
+When the mapping conditions are more complex, use the `iff` or `case` functions. The `iff` function enables mapping between two values:
+
+```KQL
+| extend EventResult =
+    iff(EventId==257 and ResponseCode==0,'Success','Failure')
+```
+
+The `case` function supports more than two target values. The example below shows how to combine `lookup` and `case`. The `lookup` example above returns an empty value in the field `DnsResponseCodeName` if the lookup value is not found. The `case` example below augments it by using the result of the `lookup` operation if available, and specifying additional conditions otherwise.
+
+```KQL
+ | extend DnsResponseCodeName =
+ case (
+ DnsResponseCodeName != "", DnsResponseCodeName,
+ DnsResponseCode between (3841 .. 4095), 'Reserved for Private Use',
+ 'Unassigned'
+ )
+
+```
+ ### Prepare fields in the result set The parser must prepare the fields in the results set to ensure that the normalized fields are used.
->[!NOTE]
-> We recommend that you do not remove any of the original fields that are not normalized from the result set, unless there is a compelling reason to do so, such as if they create confusion.
->
- The following KQL operators are used to prepare fields in your results set: |Operator | Description | When to use in a parser | ||||
-|**extend** | Creates calculated fields and adds them to the record. | `Extend` is used if the normalized fields are parsed or transformed from the original data. <br><br>For more information, see the example in the [Parsing](#parsing) section above. |
|**project-rename** | Renames fields. | If a field exists in the actual event and only needs to be renamed, use `project-rename`. <br><br>The renamed field still behaves like a built-in field, and operations on the field have much better performance. |
-|**project-away** | Removes fields. |Use `project-away` for specific fields that you want to remove from the result set. |
+|**project-away** | Removes fields. | Use `project-away` for specific fields that you want to remove from the result set. We recommend not removing the original fields that are not normalized from the result set, unless they create confusion or are very large and may have performance implications. |
|**project** | Selects fields that existed before, or were created as part of the statement, and removes all other fields. | Not recommended for use in a parser, as the parser should not remove any other fields that are not normalized. <br><br>If you need to remove specific fields, such as temporary values used during parsing, use `project-away` to remove them from the results. |-
+|**extend** | Adds aliases. | Aside from its role in generating calculated fields, the `extend` operator is also used to create aliases. |
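For example, a hedged fragment combining these operators might look like the following; the field names are illustrative and not taken from a specific schema:

```kusto
| project-rename DnsQuery = QueryName        // rename a source field to its normalized name
| extend Domain = DnsQuery                   // create an alias using extend
| project-away TempQueryParts                // remove a temporary field used during parsing
```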
### Handle parsing variants
-In many cases, events in an event stream include variants that require different parsing logic.
-
-It is often tempting to build a parser from different subparsers, each handling another event variant that needs different parsing logic. Those subparsers, each a query by itself, are then unified using the `union` operator. This approach, while convenient, is *not* recommended as it significantly impacts the performance of the parser.
-
-When handling variants, use the following guidelines:
-
-|Scenario |Handling |
-|||
-|The different variants represent *different* event types, commonly mapped to different schemas | Use separate parsers. |
-|The different variants represent the *same* event type but are structured differently. | If the variants are known, such as when there is a method to differentiate between the events before parsing, use the `case` operator to select the correct `extract_all` to run and field mapping. <br><br>Example: [Infoblox DNS parser](https://aka.ms/AzSentinelInfobloxParser) |
-|`union` is unavoidable | When you must use `union`, make sure to use the following guidelines:<br><br>- Pre-filter using built-in fields in each one of the subqueries. <br>- Ensure that the filters are mutually exclusive. <br>- Consider not parsing less critical information, reducing the number of subqueries. |
-
+>[!IMPORTANT]
+> If the different variants represent *different* event types, commonly mapped to different schemas, develop separate parsers.
+
+In many cases, events in an event stream include variants that require different parsing logic. To parse different variants in a single parser, either use conditional statements such as `iff` and `case`, or use a union structure.
+
+To use `union` to handle multiple variants, create a separate function for each variant and use the union statement to combine the results:
+
+``` Kusto
+let AzureFirewallNetworkRuleLogs = AzureDiagnostics
+ | where Category == "AzureFirewallNetworkRule"
+ | where isnotempty(msg_s);
+let parseLogs = AzureFirewallNetworkRuleLogs
+ | where msg_s has_any("TCP", "UDP")
+ | parse-where
+ msg_s with networkProtocol:string
+ " request from " srcIpAddr:string
+ ":" srcPortNumber:int
+ …
+ | project-away msg_s;
+let parseLogsWithUrls = AzureFirewallNetworkRuleLogs
+ | where msg_s has_all ("Url:","ThreatIntel:")
+ | parse-where
+ msg_s with networkProtocol:string
+ " request from " srcIpAddr:string
+ " to " dstIpAddr:string
+ …
+union parseLogs, parseLogsWithUrls…
+```
+To avoid duplicate events and excessive processing, make sure each function starts by using native fields to filter only the events it is intended to parse. Also, if needed, use `project-away` in each branch before the union.
## Deploy parsers
-Deploy parsers manually by copying them to the Azure Monitor Log page and saving your change. This method is useful for testing. For more information, see [Create a function](../azure-monitor/logs/functions.md).
+Deploy parsers manually by copying them to the Azure Monitor Log page and saving the query as a function. This method is useful for testing. For more information, see [Create a function](../azure-monitor/logs/functions.md).
To deploy a large number of parsers, we recommend using parser ARM templates, as follows:
You can also combine multiple templates to a single deploy process using [linked
## Test parsers
+This section describes the testing tools that ASIM provides to test your parsers. That said, parsers are code, sometimes complex, and standard quality assurance practices such as code reviews are recommended in addition to automated testing.
+ ### Install ASIM testing tools To test ASIM, [deploy the ASIM testing tool](https://aka.ms/ASimTestingTools) to a Microsoft Sentinel workspace where:
To make sure that your parser produces a valid schema, use the ASIM schema teste
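For example, assuming the testing tool is deployed as a workspace function named `ASimSchemaTester` that accepts the schema name (both the function name and the schema string are assumptions based on the deployed testing tool), a DNS parser schema check might look like:

```kusto
// Pass the parser's output schema to the schema tester
imDns
| getschema
| invoke ASimSchemaTester('Dns')
```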
Handle the results as follows:
-| Message | Action |
-| - | |
-| **(0) Error: Missing mandatory field [\<Field\>]** | Add this field to your parser. In many cases, this would be a derived value or a constant value, and not a field already available from the source. |
-| **(0) Error: Missing mandatory alias [\<Field\>] aliasing existing column [\<Field\>]** | Add this alias to your parser. |
-| **(0) Error: Missing mandatory alias [\<Field\>] aliasing missing column [\<Field\>]** | This error accompanies a similar error for the aliased field. Correct the aliased field error and add this alias to your parser. |
-| **(0) Error: Missing recommended alias [\<Field\>] aliasing existing column [\<Field\>]** | Add this alias to your parser. |
-| **(0) Error: Missing optional alias [\<Field\>] aliasing existing column [\<Field\>]** | Add this alias to your parser. |
-| **(0) Error: type mismatch for field [\<Field\>]. It is currently [\<Type\>] and should be [\<Type\>]** | Make sure that the type of normalized field is correct, usually by using a [conversion function](/azure/data-explorer/kusto/query/scalarfunctions#conversion-functions) such as `tostring`. |
-| **(1) Warning: Missing recommended field [\<Field\>]** | Consider adding this field to your parser. |
-| **(1) Warning: Missing recommended alias [\<Field\>] aliasing non-existent column [\<Field\>]** | If you add the aliased field to the parser, make sure to add this alias as well. |
-| **(1) Warning: Missing optional alias [\<Field\>] aliasing non-existent column [\<Field\>]** | If you add the aliased field to the parser, make sure to add this alias as well. |
-| **(2) Info: Missing optional field [\<Field\>]** | While optional fields are often missing, it is worth reviewing the list to determine if any of the optional fields can be mapped from the source. |
-| **(2) Info: extra unnormalized field [\<Field\>]** | While unnormalized fields are valid, it is worth reviewing the list to determine if any of the unnormalized values can be mapped to an optional field. |
-
+| Error | Action |
+| -- | |
+| Missing mandatory field [\<Field\>] | Add the field to your parser. In many cases, this would be a derived value or a constant value, and not a field already available from the source. |
+| Missing field [\<Field\>] is mandatory when mandatory column [\<Field\>] exists | Add the field to your parser. In many cases, this field denotes the type of the existing column it refers to. |
+| Missing field [\<Field\>] is mandatory when column [\<Field\>] exists | Add the field to your parser. In many cases, this field denotes the type of the existing column it refers to. |
+| Missing mandatory alias [\<Field\>] aliasing existing column [\<Field\>] | Add the alias to your parser |
+| Missing recommended alias [\<Field\>] aliasing existing column [\<Field\>] | Add the alias to your parser |
+| Missing optional alias [\<Field\>] aliasing existing column [\<Field\>] | Add the alias to your parser |
+| Missing mandatory alias [\<Field\>] aliasing missing column [\<Field\>] | This error accompanies a similar error for the aliased field. Correct the aliased field error and add this alias to your parser. |
+| Type mismatch for field [\<Field\>]. It is currently [\<Type\>] and should be [\<Type\>] | Make sure that the type of normalized field is correct, usually by using a [conversion function](/azure/data-explorer/kusto/query/scalarfunctions#conversion-functions) such as `tostring`. |
+
+| Warning | Action |
+| -- | |
+| Missing recommended field [\<Field\>] | Consider adding this field to your parser. |
+
+| Info | Action |
+| -- | |
+| Missing recommended alias [\<Field\>] aliasing non-existent column [\<Field\>] | If you add the aliased field to the parser, make sure to add this alias as well. |
+| Missing optional alias [\<Field\>] aliasing non-existent column [\<Field\>] | If you add the aliased field to the parser, make sure to add this alias as well. |
+| Missing optional field [\<Field\>] | While optional fields are often missing, it is worth reviewing the list to determine if any of the optional fields can be mapped from the source. |
+| Extra unnormalized field [\<Field\>] | While unnormalized fields are valid, it is worth reviewing the list to determine if any of the unnormalized values can be mapped to an optional field. |
> [!NOTE] > Errors will prevent content using the parser from working correctly. Warnings will not prevent content from working, but may reduce the quality of the results.
Handle the results as follows:
| - | | | **(0) Error: type mismatch for column [\<Field\>]. It is currently [\<Type\>] and should be [\<Type\>]** | Make sure that the type of normalized field is correct, usually by using a [conversion function](/azure/data-explorer/kusto/query/scalarfunctions#conversion-functions) such as `tostring`. | | **(0) Error: Invalid value(s) (up to 10 listed) for field [\<Field\>] of type [\<Logical Type\>]** | Make sure that the parser maps the correct source field to the output field. If mapped correctly, update the parser to transform the source value to the correct type, value or format. Refer to the [list of logical types](normalization-about-schemas.md#logical-types) for more information on the correct values and formats for each logical type. <br><br>Note that the testing tool lists only a sample of 10 invalid values. |
-| **(0) Error: Empty value in mandatory field [\<Field\>]** | Mandatory fields should be populated, not just defined. Check whether the field can be populated from other sources for records for which the current source is empty. |
-| **(1) Error: Empty value in recommended field [\<Field\>]** | Recommended fields should usually be populated. Check whether the field can be populated from other sources for records for which the current source is empty. |
-| **(1) Error: Empty value in alias [\<Field\>]** | Check whether the aliased field is mandatory or recommended, and if so, whether it can be populated from other sources. |
-
+| **(1) Warning: Empty value in mandatory field [\<Field\>]** | Mandatory fields should be populated, not just defined. Check whether the field can be populated from other sources for records for which the current source is empty. |
+| **(2) Info: Empty value in recommended field [\<Field\>]** | Recommended fields should usually be populated. Check whether the field can be populated from other sources for records for which the current source is empty. |
+| **(2) Info: Empty value in optional field [\<Field\>]** | Check whether the aliased field is mandatory or recommended, and if so, whether it can be populated from other sources. |
+Many of the messages also report the number of records that generated the message and their percentage of the total sample. This percentage is a good indicator of the importance of the issue. For example, for a recommended field:
+- 90% empty values may indicate a general parsing issue.
+- 25% empty values may indicate an event variant that was not parsed correctly.
+- A handful of empty values may be a negligible issue.
> [!NOTE] > Errors will prevent content using the parser from working correctly. Warnings will not prevent content from working, but may reduce the quality of the results. >
+## Contribute parsers
-### Check for incomplete parsing
+You may want to contribute your parsers to the primary ASIM distribution. If accepted, the parsers will be available to every customer as built-in ASIM parsers.
-Check that fields are populated:
-- A field that is rarely or never populated may indicate incorrect parsing. -- A field that is usually populated but not always may indicate less common variants of the event are not parsed correctly.
+To contribute your parsers:
-You can use the following query to test how sparsely populated each field is.
+| Step | Description |
+| - | -- |
+| Develop the parsers | - Develop both a filtering parser and a parameter-less parser.<br>- Create a YAML file for the parser as described in [Deploying Parsers](#deploy-parsers) above.|
+| Test the parsers | - Make sure that your parsers pass all [tests](#test-parsers) with no errors.<br>- If any warnings are left, document them in the parser YAML file as described below. |
+| Contribute | - Create a pull request against the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel).<br>- Add your parser YAML files to the ASIM parser folders (`/Parsers/ASim<schema>/Parsers`) in the PR.<br>- Add representative sample data to the sample data folder (`/Sample Data`). |
-```KQL
-<parser name>
-| where TimeGenerated > ago(<time period>)
-| project p = pack_all()
-| mv-expand f = p
-| project f
-| extend key = tostring(bag_keys(f)[0])
-| summarize total=count(), empty=countif(strlen(f[key]) == 0) by key
-| extend sparseness = todouble(empty)/todouble(total)
-| sort by sparseness desc
-```
-
-Set the time period to the longest that performance will allow.
-
-## <a name="next-steps"></a>Next steps
+### Documenting accepted warnings
+
+If warnings listed by the ASIM testing tools are considered valid for a parser, document the accepted warnings in the parser YAML file using the `Exceptions` section, as shown in the example below.
+
+``` YAML
+Exceptions:
+- Field: DnsQuery
+ Warning: Invalid value
+  Exception: May have values such as "1164-ms-7.1440-9fdc2aab.3b2bd806-978e-11ec-8bb3-aad815b5cd42" which are not valid domain names. Those are related to TKEY RR requests.
+- Field: DnsQuery
+ Warning: Empty value in mandatory field
+ Exception: May be empty for requests for root servers and for requests for RR type DNSKEY
+```
+
+The warning specified in the YAML file should be a short form of the warning message that uniquely identifies it. The value is used to match warning messages during automated testing and ignore them.
+
+## Next steps
This article discusses developing ASIM parsers.
sentinel Normalization Modify Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-modify-content.md
Title: Modify content to use the Microsoft Sentinel Advanced Security Information Model (ASIM) | Microsoft Docs
-description: This article explains how to convert Microsoft Sentinel content to use the the Advanced Security Information Model (ASIM).
+description: This article explains how to convert Microsoft Sentinel content to use the Advanced Security Information Model (ASIM).
Last updated 11/09/2021
_Im_Dns(responsecodename='NXDOMAIN')
| summarize count() by SrcIpAddr, bin(TimeGenerated,15m) | where count_ > threshold | join kind=inner (imDns(responsecodename='NXDOMAIN')) on SrcIpAddr
-| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr```
+| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr
``` To use workspace-deployed ASIM parsers, replace the first line with the following code:
To use workspace-deployed ASIM parsers, replace the first line with the followin
```kusto imDns(responsecodename='NXDOMAIN') ```+ ### Differences between built-in and workspace-deployed parsers The two options in the example [above](#sample-normalization-for-analytics-rules) are functionally identical. The normalized, source-agnostic version has the following differences:
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
+
+ Title: List of Microsoft Sentinel Advanced Security Information Model (ASIM) parsers | Microsoft Docs
+description: This article lists Advanced Security Information Model (ASIM) parsers.
++ Last updated : 05/02/2022+
+
+
+# List of Microsoft Sentinel Advanced Security Information Model (ASIM) parsers (Public preview)
++
+This document provides a list of Advanced Security Information Model (ASIM) parsers. For an overview of ASIM parsers, refer to the [parsers overview](normalization-parsers-overview.md). To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
+> [!IMPORTANT]
+> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+## Authentication parsers
+
+- **Windows sign-ins**
+ - Collected using the Log Analytics Agent or Azure Monitor Agent.
+ - Collected using either the Security Events connectors to the SecurityEvent table or using the WEF connector to the WindowsEvent table.
+ - Reported as Security Events (4624, 4625, 4634, and 4647).
+ - reported by Microsoft 365 Defender for Endpoint, collected using the Microsoft 365 Defender connector.
+- **Linux sign-ins**
+ - reported by Microsoft 365 Defender for Endpoint, collected using the Microsoft 365 Defender connector.
+ - Reported by Microsoft Defender for IoT Endpoint.
+- **Azure Active Directory sign-ins**, collected using the Azure Active Directory connector. Separate parsers are provided for regular, non-interactive, managed identity, and service principal sign-ins.
+- **AWS sign-ins**, collected using the AWS CloudTrail connector.
+- **Okta authentication**, collected using the Okta connector.
+
+Deploy the parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/ASimAuthentication).
+
+## DNS parsers
+
+Microsoft Sentinel provides the following out-of-the-box, product-specific DNS parsers:
+
+| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| | | |
+|**Microsoft DNS Server**<br>Collected by the DNS connector<br> and the Log Analytics Agent | `_ASim_Dns_MicrosoftOMS` (regular) <br> `_Im_Dns_MicrosoftOMS` (filtering) <br><br> | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) <br><br> |
+| **Microsoft DNS Server**<br>Collected by NXlog| `_ASim_Dns_MicrosoftNXlog` (regular)<br>`_Im_Dns_MicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
+| **Azure Firewall** | `_ASim_Dns_AzureFirewall` (regular)<br> `_Im_Dns_AzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
+| **Sysmon for Windows** (event 22)<br> Collected by the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_Dns_MicrosoftSysmon` (regular)<br> `_Im_Dns_MicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
+| **Cisco Umbrella** | `_ASim_Dns_CiscoUmbrella` (regular)<br> `_Im_Dns_CiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
+| **Infoblox NIOS**<br><br>The InfoBlox parsers<br>require [configuring the relevant sources](normalization-manage-parsers.md#configure-the-sources-relevant-to-a-source-specific-parser).<br> Use `InfobloxNIOS` as the source type. | `_ASim_Dns_InfobloxNIOS` (regular)<br> `_Im_Dns_InfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
+| **GCP DNS** | `_ASim_Dns_Gcp` (regular)<br> `_Im_Dns_Gcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
+| **Corelight Zeek DNS events** | `_ASim_Dns_CorelightZeek` (regular)<br> `_Im_Dns_CorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
+| **Vectra AI** |`_ASim_Dns_VectraAI` (regular)<br> `_Im_Dns_VectraAI` (filtering) | `AsimDnsVectraAI` (regular)<br> `vimDnsVectraAI` (filtering) |
+| **Zscaler ZIA** |`_ASim_Dns_ZscalerZIA` (regular)<br> `_Im_Dns_ZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsZscalerZIA` (filtering) |
+||||
+
+Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimDNS).
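+
+For example, a query sketch such as the following (assuming the built-in unifying DNS parser `_Im_Dns` and its `domain_has_any` filtering parameter are available; the domain is illustrative) returns normalized DNS queries for a specific domain across all of the sources above:
+
+```kql
+// Sketch only: _Im_Dns and domain_has_any are assumed to be available in the workspace.
+_Im_Dns (starttime=ago(1d), endtime=now(), domain_has_any=dynamic(['contoso.com']))
+| summarize Queries = count() by SrcIpAddr, DnsQuery
+```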
+
+## File Activity parsers
+
+Microsoft Sentinel provides the following out-of-the-box, product-specific File Activity parsers:
+
+- **Sysmon file activity events** (Events 11, 23, and 26), collected using the Log Analytics Agent or Azure Monitor Agent.
+- **Microsoft Office 365 SharePoint and OneDrive events**, collected using the Office Activity connector.
+- **Microsoft 365 Defender for Endpoint file events**
+- **Azure Storage**, including Blob, File, Queue, and Table Storage.
+
+Deploy the parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/ASimFileEvent).
+
+## Network Session parsers
+
+Microsoft Sentinel provides the following out-of-the-box, product-specific Network Session parsers:
+
+| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| | | |
+| **AWS VPC logs** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) |
+| **Azure Firewall logs** |`_ASim_NetworkSession_AzureFirewall` (regular)<br> `_Im_NetworkSession_AzureFirewall` (filtering) | `ASimNetworkSessionAzureFirewall` (regular)<br> `vimNetworkSessionAzureFirewall` (filtering) |
+| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
+| **Azure Network Security Groups (NSG) logs** collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_AzureNSG` (regular)<br> `_Im_NetworkSession_AzureNSG` (filtering) | `ASimNetworkSessionAzureNSG` (regular)<br> `vimNetworkSessionAzureNSG` (filtering) |
+| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
+| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
+| **Palo Alto PanOS traffic logs** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
+| **Sysmon for Linux** (event 3)<br> Collected using the Log Analytics Agent<br> or the Azure Monitor Agent |`_ASim_NetworkSession_LinuxSysmon` (regular)<br><br>`_Im_NetworkSession_LinuxSysmon` (filtering) | `ASimNetworkSessionLinuxSysmon` (regular)<br><br> `vimNetworkSessionLinuxSysmon` (filtering) |
+| **Windows Firewall logs**<br>Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. |`_ASim_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br>`_Im_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (filtering) | `ASimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br> `vimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (filtering) |
+| **Zscaler ZIA firewall logs** |`_ASim_NetworkSessionZscalerZIA` (regular)<br> `_Im_NetworkSessionZscalerZIA` (filtering) | `AsimNetworkSessionZscalerZIA` (regular)<br> `vimNetworkSessionZscalerZIA` (filtering) |
+
+Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimNetworkSession).
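+
+For example, a query sketch such as the following (assuming the built-in unifying Network Session parser `_Im_NetworkSession` and its `dstportnumber` filtering parameter are available) counts RDP sessions reported by any of the sources above:
+
+```kql
+// Sketch only: _Im_NetworkSession and dstportnumber are assumed to be available.
+_Im_NetworkSession (starttime=ago(1h), endtime=now(), dstportnumber=3389)
+| summarize Sessions = count() by SrcIpAddr, DstIpAddr
+```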
+
+## Process Event parsers
+
+Microsoft Sentinel provides the following built-in, product-specific Process Event parsers:
+
+- **Security Events process creation (Event 4688)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Security Events process termination (Event 4689)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Sysmon process creation (Event 1)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Sysmon process termination (Event 5)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Microsoft 365 Defender for Endpoint process creation**
+
+Deploy Process Event parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimProcessEvent).
+
+## Registry Event parsers
+
+Microsoft Sentinel provides the following built-in, product-specific Registry Event parsers:
+
+- **Security Events registry update (Event 4657)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Sysmon registry monitoring events (Events 12, 13, and 14)**, collected using the Log Analytics Agent or Azure Monitor Agent
+- **Microsoft 365 Defender for Endpoint registry events**
+
+Deploy Registry Event parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimRegistryEvent).
+
+## Web Session parsers
+
+Microsoft Sentinel provides the following out-of-the-box, product-specific Web Session parsers:
+
+| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| | | |
+|**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> |
+| **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `AsimWebSessionZscalerZIA` (regular)<br> `vimWebSessionZscalerZIA` (filtering) |
++
+These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
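+
+For example, a query sketch such as the following (assuming the built-in unifying Web Session parser `_Im_WebSession` and its `url_has_any` filtering parameter are available; the URL fragment is illustrative) summarizes matching requests from both sources above:
+
+```kql
+// Sketch only: _Im_WebSession and url_has_any are assumed to be available.
+_Im_WebSession (starttime=ago(1d), endtime=now(), url_has_any=dynamic(['/login']))
+| summarize Requests = count() by SrcIpAddr, Url
+```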
+
+## <a name="next-steps"></a>Next steps
+
+Learn more about ASIM parsers:
+
+- [Use ASIM parsers](normalization-about-parsers.md)
+- [Develop custom ASIM parsers](normalization-develop-parsers.md)
+- [Manage ASIM parsers](normalization-manage-parsers.md)
+
+For more about ASIM, in general, see:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced Security Information Model (ASIM) overview](normalization.md)
+- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Parsers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-overview.md
Each method has advantages over the other:
| **Disadvantages** |Cannot be directly modified by users. <br><br>Fewer parsers available. | Not used by built-in content. | | **When to use** | Use in most cases that you need ASIM parsers. | Use when deploying new parsers, or for parsers not yet available out-of-the-box. | -
-> [!TIP]
-> Using both built-in and workspace-deployed parsers is useful when you want to customize built-in parsers by adding custom, workspace-deployed parsers to the built-in parser hierarchy. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
->
+We recommend using built-in parsers whenever they're available for a schema.
## Parser hierarchy ASIM includes two levels of parsers: a **unifying** parser and **source-specific** parsers. Users typically use the **unifying** parser for the relevant schema, ensuring that all data relevant to the schema is queried. The **unifying** parser in turn calls **source-specific** parsers, which perform the actual parsing and normalization specific to each source.
+The unifying parser name is `_Im_<schema>` for built-in parsers and `im<schema>` for workspace-deployed parsers, where `<schema>` stands for the specific schema it serves. Source-specific parsers can also be used independently. For example, in an Infoblox-specific workbook, use the `vimDnsInfobloxNIOS` source-specific parser. You can find a list of source-specific parsers in the [ASIM parsers list](normalization-parsers-list.md).
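+
+For example, the following sketch (assuming the built-in DNS parsers and the workspace-deployed `vimDnsInfobloxNIOS` parser mentioned above are deployed) contrasts querying through the unifying parser with querying a single source directly:
+
+```kql
+// Query normalized DNS events from every configured source through the
+// built-in unifying parser (_Im_<schema> naming).
+_Im_Dns
+| summarize Events = count() by EventProduct
+
+// Alternatively, query only Infoblox NIOS events through its workspace-deployed,
+// source-specific parser.
+vimDnsInfobloxNIOS
+| summarize Events = count() by EventResult
+```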
++ >[!TIP] > The built-in parser hierarchy adds a layer to support customization. For more information, see [Managing ASIM parsers](normalization-develop-parsers.md).
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
Microsoft Sentinel ingests data from many sources. Working with various data typ
Sometimes, you'll need separate rules, workbooks, and queries, even when data types share common elements, such as firewall devices. Correlating between different types of data during an investigation and hunting can also be challenging.
-The Advanced Security Information Model (ASIM) is a layer that is located between these diverse sources and the user. ASIM follows the [robustness principal](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principal as design pattern, ASIM transforms Microsoft Sentinel's inconsistent and hard to use source telemetry to user friendly data.
+The Advanced Security Information Model (ASIM) is a layer that is located between these diverse sources and the user. ASIM follows the [robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principle as a design pattern, ASIM transforms Microsoft Sentinel's inconsistent and hard-to-use source telemetry into user-friendly data.
This article provides an overview of the Advanced Security Information Model (ASIM), its use cases and major components. Refer to the [next steps](#next-steps) section for more details.
ASIM includes the following components:
|Component |Description | ||| |**Normalized schemas** | Cover standard sets of predictable event types that you can use when building unified capabilities. <br><br>Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values. <br><br> ASIM currently defines the following schemas:<br> - [Authentication Event](authentication-normalization-schema.md)<br> - [DHCP Activity](dhcp-normalization-schema.md)<br> - [DNS Activity](dns-normalization-schema.md)<br> - [File Activity](file-event-normalization-schema.md) <br> - [Network Session](./network-normalization-schema.md)<br> - [Process Event](process-events-normalization-schema.md)<br> - [Registry Event](registry-event-normalization-schema.md)<br>- [User Management](user-management-normalization-schema.md)<br> - [Web Session](web-normalization-schema.md)<br><br>For more information, see [ASIM schemas](normalization-about-schemas.md). |
-|**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and versions of the built-in parsers that can be modified can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim). <br><br>For more information, see [ASIM parsers](normalization-about-parsers.md). |
+|**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and versions of the built-in parsers that can be modified can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim). <br><br>For more information, see [ASIM parsers](normalization-parsers-overview.md). |
|**Content for each normalized schema** | Includes analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content. <br><br>For more information, see [ASIM content](normalization-content.md). |
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/process-events-normalization-schema.md
For more information about normalization in Microsoft Sentinel, see [Normalizati
## Parsers
-Microsoft Sentinel provides the following built-in, product-specific process event parsers:
--- **Security Events process creation (Event 4688)**, collected using the Log Analytics Agent or Azure Monitor Agent-- **Security Events process termination (Event 4689)**, collected using the Log Analytics Agent or Azure Monitor Agent-- **Sysmon process creation (Event 1)**, collected using the Log Analytics Agent or Azure Monitor Agent-- **Sysmon process termination (Event 5)**, collected using the Log Analytics Agent or Azure Monitor Agent-- **Microsoft 365 Defender for Endpoint process creation**- To use the unifying parsers that unify all of listed parsers and ensure that you analyze across all the configured sources, use the following table names in your queries: -- **imProcessCreate**, for queries that require process creation information. These queries are the most common case.
+- **imProcessCreate** for queries that require process creation information. These queries are the most common case.
- **imProcessTerminate** for queries that require process termination information.-- **imProcessEvents** for queries that require both process creation and termination information. In such cases, the `EventType` field enables you to distinguish between the events, and is set to `ProcessCreate` or `ProcessTerminate`, respectively. Process termination events generally include a lot less information than process creation events.
-Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelProcessEvents).
+For the list of the Process Event parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#process-event-parsers).
+
+Deploy the Process Event parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelProcessEvents).
For more information, see [ASIM parsers overview](normalization-parsers-overview.md). ## Add your own normalized parsers
-When implementing custom parsers for the [Process Event](normalization-about-schemas.md#the-process-entity) information model, name your KQL functions using the following syntax: `imProcess<Type><vendor><Product>`, where `Type` is either `Create`, `Terminate`, or `Event` if the parser implements both creation and termination events.
+When implementing custom parsers for the [Process Event](normalization-about-schemas.md#the-process-entity) information model, name your KQL functions using the following syntax: `imProcessCreate<vendor><Product>` and `imProcessTerminate<vendor><Product>`. Replace `im` with `ASim` for the parameterless version.
Add your KQL function to the `imProcess<Type>` and `imProcess` unifying parsers to ensure that any content using the [Process Event](normalization-about-schemas.md#the-process-entity) model also uses your new parser.
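+
+As an illustration only, the following sketch shows the shape of such a function for a hypothetical product; the `ContosoEdr_CL` table and its columns are invented for the example:
+
+```kql
+// Hypothetical source-specific parser following the imProcessCreate<vendor><Product>
+// naming convention. Table and column names are invented for illustration.
+let imProcessCreateContosoEdr = () {
+    ContosoEdr_CL
+    | where EventOriginalType_s == 'ProcessCreate'
+    | project
+        TimeGenerated,
+        EventType = 'ProcessCreated',
+        ActorUsername = InitiatingUser_s,
+        TargetProcessName = ProcessPath_s,
+        TargetProcessCommandLine = CommandLine_s
+};
+imProcessCreateContosoEdr()
+| take 10
+```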
-## Normalized content for process activity data
+### Filtering parser parameters
-The following Microsoft Sentinel content works with any process activity that's normalized using the Advanced Security Information Model:
+The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimizing-parsing-using-parameters). While these parameters are optional, they can improve your query performance.
-- **Analytics rules**:
+The following filtering parameters are available:
- - [Probable AdFind Recon Tool Usage (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_AdFind_Usage.yaml)
- - [Base64 encoded Windows process command-lines (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_base64_encoded_pefile.yaml)
- - [Malware in the recycle bin (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_malware_in_recyclebin.yaml)
- - [NOBELIUM - suspicious rundll32.exe execution of vbscript (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_NOBELIUM_SuspiciousRundll32Exec.yaml)
- - [SUNBURST suspicious SolarWinds child processes (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimProcess/imProcess_SolarWinds_SUNBURST_Process-IOCs.yaml)
+| Name | Type | Description |
+|-|--|-|
+| **starttime** | datetime | Filter only process events that occurred at or after this time. |
+| **endtime** | datetime | Filter only process events that occurred at or before this time. |
+| **commandline_has_any** | dynamic | Filter only process events for which the command line executed has **any** of the listed values. The length of the list is limited to 10,000 items. |
+| **commandline_has_all**| dynamic | Filter only process events for which the command line executed has **all** of the listed values. The length of the list is limited to 10,000 items. |
+| **commandline_has_any_ip_prefix** | dynamic | Filter only process events for which the command line executed has **any** of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items. |
+| **actingprocess_has_any** | dynamic | Filter only process events for which the acting process name, which includes the entire process path, has any of the listed values. The length of the list is limited to 10,000 items. |
+| **targetprocess_has_any** | dynamic| Filter only process events for which the target process name, which includes the entire process path, has any of the listed values. The length of the list is limited to 10,000 items. |
+| **parentprocess_has_any** | dynamic | Filter only process events for which the parent process name, which includes the entire process path, has any of the listed values. The length of the list is limited to 10,000 items. |
+| **targetusername_has** or **actorusername_has** | string| Filter only process events for which the target username (for process create events), or actor username (for process terminate events) has any of the listed values. The length of the list is limited to 10,000 items. |
+| **dvcipaddr_has_any_prefix** | dynamic | Filter only process events for which the device IP address matches any of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
+| **dvchostname_has_any**| dynamic | Filter only process events for which the device hostname has any of the listed values. The length of the list is limited to 10,000 items. |
+| **eventtype**| string | Filter only process events of the specified type. |
- For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
-- **Hunting queries**:
- - [Cscript script daily summary breakdown (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_cscript_summary.yaml)
- - [Enumeration of users and groups (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_enumeration_user_and_group.yaml)
- - [Exchange PowerShell Snapin Added (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_ExchangePowerShellSnapin.yaml)
- - [Host Exporting Mailbox and Removing Export (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_HostExportingMailboxAndRemovingExport.yaml)
- - [Invoke-PowerShellTcpOneLine Usage (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_Invoke-PowerShellTcpOneLine.yaml)
- - [Nishang Reverse TCP Shell in Base64 (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_NishangReverseTCPShellBase64.yaml)
- - [Summary of users created using uncommon/undocumented commandline switches (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_persistence_create_account.yaml)
- - [Powercat Download (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_PowerCatDownload.yaml)
- - [PowerShell downloads (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_powershell_downloads.yaml)
- - [Entropy for Processes for a given Host (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_ProcessEntropy.yaml)
- - [SolarWinds Inventory (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_SolarWindsInventory.yaml)
- - [Suspicious enumeration using Adfind tool (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_Suspicious_enumeration_using_adfind.yaml)
- - [Windows System Shutdown/Reboot (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_Windows%20System%20Shutdown-Reboot(T1529).yaml)
- - [Certutil (LOLBins and LOLScripts, Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_Certutil-LOLBins.yaml)
- - [Rundll32 (LOLBins and LOLScripts, Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/inProcess_SignedBinaryProxyExecutionRundll32.yaml)
- - [Uncommon processes - bottom 5% (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_uncommon_processes.yaml)
- - [Unicode Obfuscation in Command Line](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/UnicodeObfuscationInCommandLine.yaml)
- For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
+For example, to filter only process creation events from the last day for a specific user, use:
+
+```kql
+imProcessCreate (targetusername_has = 'johndoe', starttime = ago(1d), endtime=now())
+```
+
+> [!TIP]
+> To pass a literal list to parameters that expect a dynamic value, explicitly use a [dynamic literal](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals). For example: `dynamic(['192.168.','10.'])`.
+>
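+
+For example, a sketch combining a dynamic literal with the `commandline_has_any` parameter (the search terms are purely illustrative):
+
+```kql
+imProcessCreate (starttime=ago(7d), endtime=now(), commandline_has_any=dynamic(['mimikatz', 'procdump']))
+| summarize Executions = count() by TargetProcessName, ActorUsername
+```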
+
+## Normalized content
+
+For a full list of analytics rules that use normalized process events, see [Process Event security content](normalization-content.md#process-activity-security-content).
## Schema details
The following list mentions fields that have specific guidelines for process act
| Field | Class | Type | Description | ||-||--| | **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For Process records, supported values include: <br>- `ProcessCreated` <br>- `ProcessTerminated` |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1` |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.2` |
| **EventSchema** | Optional | String | The name of the schema documented here is `ProcessEvent`. | | **Dvc** fields| | | For process activity events, device fields refer to the system on which the process was executed. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
The fields listed in the table below are specific to Process events, but are sim
The process event schema references the following entities, which are central to process creation and termination activity: -- **Actor**. The user that initiated the process creation or termination.-- **ActingProcess**. The process used by the Actor to initiate the process creation or termination.-- **TargetProcess**. The new process.-- **TargetUser**. The user whose credentials are used to create the new process.-- **ParentProcess**. The process that initiated the Actor Process.
+- **Actor** - The user that initiated the process creation or termination.
+- **ActingProcess** - The process used by the Actor to initiate the process creation or termination.
+- **TargetProcess** - The new process.
+- **TargetUser** - The user whose credentials are used to create the new process.
+- **ParentProcess** - The process that initiated the Actor Process.
+
+### Aliases
| Field | Class | Type | Description | ||--||--| | **User** | Alias | | Alias to the [TargetUsername](#targetusername). <br><br>Example: `CONTOSO\dadmin` | | **Process** | Alias | | Alias to the [TargetProcessName](#targetprocessname) <br><br>Example: `C:\Windows\System32\rundll32.exe`| | **CommandLine** | Alias | | Alias to [TargetProcessCommandLine](#targetprocesscommandline) |
-| **Hash** | Alias | | Alias to the best available hash. |
-| <a name="actorusername"></a>**ActorUsername** | Mandatory | String | The user name of the user who initiated the event. <br><br>Example: `CONTOSO\WIN-GG82ULGC9GO$` |
-| **ActorUsernameType** | Mandatory | Enumerated | Specifies the type of the user name stored in the [ActorUsername](#actorusername) field. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `Windows` |
-| <a name="actoruserid"></a>**ActorUserId** | Recommended | String | A unique ID of the Actor. The specific ID depends on the system generating the event. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-5-18` |
-| **ActorUserIdType**| Recommended | String | The type of the ID stored in the [ActorUserId](#actoruserid) field. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `SID` |
+| **Hash** | Alias | | Alias to the best available hash for the target process. |
++
+### Actor fields
+
+| Field | Class | Type | Description |
+||--||--|
+| <a name="actorusername"></a>**ActorUsername** | Mandatory | String | The Actor username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [ActorUsernameType](#actorusernametype) field. If other username formats are available, store them in the fields `ActorUsername<UsernameType>`.<br><br>Example: `AlbertE` |
+| <a name="actorusernametype"></a>**ActorUsernameType** | Mandatory | Enumerated | Specifies the type of the user name stored in the [ActorUsername](#actorusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
+| <a name="actoruserid"></a>**ActorUserId** | Recommended | String | A machine-readable, alphanumeric, unique representation of the Actor. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12` |
+| **ActorUserIdType**| Recommended | String | The type of the ID stored in the [ActorUserId](#actoruserid) field. For a list of allowed values and further information refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
| **ActorSessionId** | Optional | String | The unique ID of the login session of the Actor. <br><br>Example: `999`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows this value must be numeric. <br><br>If you are using a Windows machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **ActorUserType** | Optional | UserType | The type of Actor. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [ActorOriginalUserType](#actororiginalusertype) field. |
+| <a name="actororiginalusertype"></a>**ActorOriginalUserType** | Optional | String | The original actor user type, if provided by the reporting device. |
+
+### Acting process fields
+
+| Field | Class | Type | Description |
+||--||--|
| **ActingProcessCommandLine** | Optional | String | The command line used to run the acting process. <br><br>Example: `"choco.exe" -v` | | **ActingProcessName** | Optional | string | The name of the acting process. This name is commonly derived from the image or executable file that's used to define the initial code and data that's mapped into the process' virtual address space.<br><br>Example: `C:\Windows\explorer.exe` | | **ActingProcessFileCompany** | Optional | String | The company that created the acting process image file. <br><br> Example: `Microsoft` |
The process event schema references the following entities, which are central to
| **ActingProcessCreationTime** | Optional | DateTime | The date and time when the acting process was started. | | **ActingProcessTokenElevation** | Optional | String | A token indicating the presence or absence of User Access Control (UAC) privilege elevation applied to the acting process. <br><br>Example: `None`| | **ActingProcessFileSize** | Optional | Long | The size of the file that ran the acting process. |+
+### Parent process fields
+
+| Field | Class | Type | Description |
+||--||--|
| **ParentProcessName** | Optional | string | The name of the parent process. This name is commonly derived from the image or executable file that's used to define the initial code and data that's mapped into the process' virtual address space.<br><br>Example: `C:\Windows\explorer.exe` | | **ParentProcessFileCompany** | Optional | String |The name of the company that created the parent process image file. <br><br> Example: `Microsoft` | | **ParentProcessFileDescription** | Optional | String | The description from the version information in the parent process image file. <br><br>Example: `Notepad++ : a free (GPL) source code editor`|
The process event schema references the following entities, which are central to
| **ParentProcessIMPHASH** | Optional | String | The Import Hash of all the library DLLs that are used by the parent process. | | **ParentProcessTokenElevation** | Optional | String |A token indicating the presence or absence of User Access Control (UAC) privilege elevation applied to the parent process. <br><br> Example: `None` | | **ParentProcessCreationTime** | Optional | DateTime | The date and time when the parent process was started. |
-| <a name="targetusername"></a>**TargetUsername** | Mandatory for process create events. | String | The username of the target user. <br><br>Example: `CONTOSO\WIN-GG82ULGC9GO$` |
-| **TargetUsernameType** | Mandatory for process create events. | Enumerated | Specifies the type of the username stored in the [TargetUsername](#targetusername) field. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br> Example: `Windows` |
-|<a name="targetuserid"></a> **TargetUserId** | Recommended | String | A unique ID of the target user. The specific ID depends on the system generating the event. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br> Example: `S-1-5-18` |
-| **TargetUserIdType** | Recommended | String | The type of the user ID stored in the [TargetUserId](#targetuserid) field. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity). <br><br> Example: `SID` |
++
+### Target user fields
+
+| Field | Class | Type | Description |
+||--||--|
+| <a name="targetusername"></a>**TargetUsername** | Mandatory for process create events. | String | The target username, including domain information when available. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). Use the simple form only if domain information isn't available.<br><br>Store the Username type in the [TargetUsernameType](#targetusernametype) field. If other username formats are available, store them in the fields `TargetUsername<UsernameType>`.<br><br>Example: `AlbertE` |
+| <a name="targetusernametype"></a>**TargetUsernameType** | Mandatory for process create events. | Enumerated | Specifies the type of the user name stored in the [TargetUsername](#targetusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` |
+|<a name="targetuserid"></a> **TargetUserId** | Recommended | String | A machine-readable, alphanumeric, unique representation of the target user. For the supported format for different ID types, refer to [the User entity](normalization-about-schemas.md#the-user-entity). <br><br>Example: `S-1-12` |
+| **TargetUserIdType** | Recommended | String | The type of the ID stored in the [TargetUserId](#targetuserid) field. For a list of allowed values and further information refer to [UserIdType](normalization-about-schemas.md#useridtype) in the [Schema Overview article](normalization-about-schemas.md). |
| **TargetUserSessionId** | Optional | String |The unique ID of the target user's login session. <br><br>Example: `999` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **TargetUserType** | Optional | UserType | The type of the target user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [TargetOriginalUserType](#targetoriginalusertype) field. |
+| <a name="targetoriginalusertype"></a>**TargetOriginalUserType** | Optional | String | The original target user type, if provided by the reporting device. |
++
+### Target process fields
+
+| Field | Class | Type | Description |
+||--||--|
| <a name="targetprocessname"></a>**TargetProcessName** | Mandatory | string |The name of the target process. This name is commonly derived from the image or executable file that's used to define the initial code and data that's mapped into the process' virtual address space. <br><br> Example: `C:\Windows\explorer.exe` | | **TargetProcessFileCompany** | Optional | String |The name of the company that created the target process image file. <br><br> Example: `Microsoft` | | **TargetProcessFileDescription** | Optional | String | The description from the version information in the target process image file. <br><br>Example: `Notepad++ : a free (GPL) source code editor` |
The process event schema references the following entities, which are central to
| **TargetProcessSHA256** | Optional | SHA256 | The SHA-256 hash of the target process image file. <br><br> Example: <br> `e81bb824c4a09a811af17deae22f22dd`<br>`2e1ec8cbb00b22629d2899f7c68da274` | | **TargetProcessSHA512** | Optional | SHA512 | The SHA-512 hash of the target process image file. | | **TargetProcessIMPHASH** | Optional | String | The Import Hash of all the library DLLs that are used by the target process. |
+| **HashType** | Recommended | String | The type of hash stored in the Hash alias field. Allowed values are `MD5`, `SHA1`, `SHA256`, `SHA512`, and `IMPHASH`. |
| <a name="targetprocesscommandline"></a> **TargetProcessCommandLine** | Mandatory | String | The command line used to run the target process. <br><br> Example: `"choco.exe" -v` | | <a name="targetprocesscurrentdirectory"></a> **TargetProcessCurrentDirectory** | Optional | String | The current directory in which the target process is executed. <br><br> Example: `c:\windows\system32` | | **TargetProcessCreationTime** | Mandatory | DateTime | The product version from the version information of the target process image file. |
The process event schema references the following entities, which are central to
These are the changes in version 0.1.1 of the schema: - Added the field `EventSchema` - currently optional, but will become mandatory on Sep 1st, 2022.
+These are the changes in version 0.1.2 of the schema:
+- Added the fields `ActorUserType`, `ActorOriginalUserType`, `TargetUserType`, `TargetOriginalUserType`, and `HashType`.
+ ## Next steps For more information, see:
sentinel Registry Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/registry-event-normalization-schema.md
For more information about normalization in Microsoft Sentinel, see [Normalizati
## Parsers
-Microsoft Sentinel provides the following built-in, product-specific registry event parsers:
--- **Security Events registry update (Event 4657**), collected using the Log Analytics Agent or Azure Monitor Agent-- **Sysmon registry monitoring events (Events 12, 13, and 14)**, collected using the Log Analytics Agent or Azure Monitor Agent-- **Microsoft 365 Defender for Endpoint registry events**- To use the unifying parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use **imRegistry** as the table name in your query.
+For the list of the Registry Event parsers Microsoft Sentinel provides out-of-the-box, refer to the [ASIM parsers list](normalization-parsers-list.md#registry-event-parsers).
+ Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelRegistry). For more information, see [ASIM parsers](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
This section explains how to import a certificate so that it's trusted by your A
chmod +x ./sapcon-sentinel-kickstart.sh ```
-1. Run the script, specifying the following parameters:
+1. Run the script, specifying the following base parameters:
```bash ./sapcon-sentinel-kickstart.sh \ --use-snc \ --cryptolib <path to sapcryptolib.so> \ --sapgenpse <path to sapgenpse> \
- # CLIENT CERTIFICATE
- # If client certificate is in .crt/.key format
+ --server-cert <path to server certificate public key> \
+ ```
+ If the client certificate is in .crt/.key format, use the following switches:
+ ```bash
--client-cert <path to client certificate public key> \ --client-key <path to client certificate private key> \
- # If client certificate is in .pfx or .p12 format
+ ```
+   If the client certificate is in .pfx or .p12 format, use the following switches:
+ ```bash
--client-pfx <pfx filename> --client-pfx-passwd <password>
- # If client certificate issued by enterprise CA
- --cacert <path to ca certificate> # for each CA in the trust chain
- # SERVER CERTIFICATE
- --server-cert <path to server certificate public key> \
-
+ ```
+   If the client certificate was issued by an enterprise CA, add the following switch for **each** CA in the trust chain:
+ ```bash
+   --cacert <path to ca certificate>
``` For example:
This section explains how to import a certificate so that it's trusted by your A
--use-snc \ --cryptolib /home/azureuser/libsapcrypto.so \ --sapgenpse /home/azureuser/sapgenpse \
- # CLIENT CERTIFICATE
- # If client certificate is in .crt/.key format
--client-cert /home/azureuser/client.crt \ --client-key /home/azureuser/client.key \
- # If client certificate is in .pfx or .p12 format
- --client-pfx /home/azureuser/client.pfx \
- --client-pfx-passwd <password>
- # If client certificate issued by enterprise CA
--cacert /home/azureuser/issuingca.crt --cacert /home/azureuser/rootca.crt
- # SERVER CERTIFICATE
--server-cert /home/azureuser/server.crt \ ```
sentinel Configure Transport https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-transport.md
The following steps show the process for configuring the Transport Management Sy
Now that you've configured the Transport Management System, you'll be able to successfully complete the `STMS_IMPORT` transaction and you can continue [preparing your SAP environment](preparing-sap.md) for deploying the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel. > [!div class="nextstepaction"]
-> [Deploy SAP Change Requests and configure authorization](preparing-sap.md#set-up-the-applications)
+> [Deploy SAP Change Requests and configure authorization](preparing-sap.md#import-the-crs)
Learn more about the Microsoft Sentinel SAP solutions:
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Deployment of the SAP continuous threat monitoring solution is divided into the
## Data connector agent deployment overview
-The Continuous Threat Monitoring solution for SAP is built on first getting all your SAP log data into Microsoft Sentinel, so that all the other components of the solution can do their jobs. To accomplish this, you need to deploy the SAP data connector agent.
+For the Continuous Threat Monitoring solution for SAP to operate correctly, data must first be ingested from the SAP system into Microsoft Sentinel. To accomplish this, you need to deploy the Continuous Threat Monitoring solution for SAP data connector agent.
-The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in other clouds, or on-premises. You install and configure this container using a *kickstart* script.
+The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. The recommended way to install and configure this container is by using a *kickstart* script; however, you can choose to deploy the container [manually](?tabs=deploy-manually).
-The agent connects to your SAP system to pull the logs from it, and then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system - that's why you created a user and a role for the agent in your SAP system in the previous step.
+The agent connects to your SAP system to pull logs and other data from it, and then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system - that's why you created a user and a role for the agent in your SAP system in the previous step.
Your SAP authentication infrastructure, and where you deploy your VM, will determine how and where your agent configuration information, including your SAP authentication secrets, is stored. These are the options, in descending order of preference:
Your SAP authentication infrastructure, and where you deploy your VM, will deter
- An Azure Key Vault, accessed through an Azure AD **registered-application service principal** - A plaintext **configuration file**
-If your **SAP authentication** infrastructure is based on **PKI**, using **X.509 certificates**, your only option is to use a configuration file. Select the **Configuration file** tab below for the instructions to deploy your agent container.
+If your **SAP authentication** infrastructure is based on **SNC**, using **X.509 certificates**, your only option is to use a configuration file. Select the **Configuration file** tab below for the instructions to deploy your agent container.
If not, then your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
If not, then your SAP configuration and authentication secrets can and should be
# [Managed identity](#tab/managed-identity)
+1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`): ```azurecli az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity ```
+ For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
+
+ > [!IMPORTANT]
+ > After the VM is created, be sure to apply any security requirements and hardening procedures applicable in your organization.
+ >
The command above will create the VM resource, producing output that looks like this:
If not, then your SAP configuration and authentication secrets can and should be
``` 1. Copy the **systemAssignedIdentity** GUID, as it will be used in the coming steps.-
- For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).
-
- > [!IMPORTANT]
- > After the VM is created, be sure to apply any security requirements and hardening procedures applicable in your organization.
- >
-
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
+
+1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step:
```azurecli
- kvgp=<KVResourceGroup>
- kvname=<keyvaultname>
-
- #Create a key vault
az keyvault create \
- --name $kvname \
- --resource-group $kvgp
- ```
-
- If you'll be using an existing key vault, ignore this step.
+ --name <KeyVaultName> \
+ --resource-group <KeyVaultResourceGroupName>
+ ```
1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these when you run the deployment script in the coming steps. 1. Run the following command to **assign a key vault access policy** to the VM's system-assigned identity that you copied above (substitute actual names for the `<placeholders>`): ```azurecli
- az keyvault set-policy -n <key vault name> -g <key vault resource group> --object-id <VM system-assigned identity> --secret-permissions get list set
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --object-id <VM system-assigned identity> --secret-permissions get list set
``` This policy will allow the VM to list, read, and write secrets from/to the key vault.
If not, then your SAP configuration and authentication secrets can and should be
wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh ```
- The script updates the OS components and installs the Azure CLI and Docker software.
+    The script updates the OS components, installs the Azure CLI, Docker software, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts, or to customize the container deployment. For more information on available command line options, see the [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
# [Registered application](#tab/registered-application)
+1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
1. Run the following command to **create and register an application**: ```azurecli
If not, then your SAP configuration and authentication secrets can and should be
1. Copy the **appId**, **tenant**, and **password** from the output. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps.
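    The exact registration command is elided in this hunk. One common way to create such a registration, offered here only as a minimal sketch and not necessarily the documented command, is `az ad sp create-for-rbac`; its output contains the **appId**, **password**, and **tenant** values used below (the application name is illustrative):

    ```azurecli
    # Create an application registration and service principal (illustrative name)
    az ad sp create-for-rbac --name sentinel-sap-connector
    ```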
-1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`):
+1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step:
```azurecli
- kvgp=<KVResourceGroup>
- kvname=<keyvaultname>
-
- #Create a key vault
az keyvault create \
- --name $kvname \
- --resource-group $kvgp
- ```
-
- If you'll be using an existing key vault, ignore this step.
-
+ --name <KeyVaultName> \
+ --resource-group <KeyVaultResourceGroupName>
+ ```
1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps. 1. Run the following command to **assign a key vault access policy** to the registered application ID that you copied above (substitute actual names or values for the `<placeholders>`): ```azurecli
- az keyvault set-policy -n <key vault name> -g <key vault resource group> --spn <appid> --secret-permissions get list set
+ az keyvault set-policy -n <KeyVaultName> -g <KeyVaultResourceGroupName> --spn <appId> --secret-permissions get list set
``` For example:
If not, then your SAP configuration and authentication secrets can and should be
./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name> ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values.
+ The script updates the OS components, installs the Azure CLI, Docker, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts, or to customize the container deployment. For more information on the available command line options, see the [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
# [Configuration file](#tab/config-file)
+1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**: ```bash
If not, then your SAP configuration and authentication secrets can and should be
./sapcon-sentinel-kickstart.sh --keymode cfgf ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values.
+ The script updates the OS components, installs the Azure CLI, Docker, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts, or to customize the container deployment. For more information on the available command line options, see the [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
-## Deploy the SAP data connector manually
+# [Manual deployment](#tab/deploy-manually)
1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-1. Install [Docker](https://www.docker.com/)
+1. Install [Docker](https://www.docker.com/) on the VM, following the [recommended deployment steps](https://docs.docker.com/engine/install/) for the chosen operating system.
1. Use the following commands (replacing <*SID*> with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.ini file into that folder.
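    The folder and download commands themselves are elided in this hunk; the general shape is sketched below, with `<SID>` and the sample file URL left as placeholders rather than guessed (the folder path matches the volume mount used by the `docker create` command that follows):

    ````bash
    # Create the per-SID configuration folder and fetch a sample systemconfig.ini into it
    sid=<SID>
    mkdir -p /opt/sapcon/$sid
    wget -O /opt/sapcon/$sid/systemconfig.ini <URL of the sample systemconfig.ini>
    ````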
If not, then your SAP configuration and authentication secrets can and should be
docker create -d --restart unless-stopped -v /opt/sapcon/$sid/:/sapcon-app/sapcon/config/system --name sapcon-$sid sapcon ````
-1. Run the following command (replacing <*SID*> with the name of the SAP instance) to copy the SDK into the container.
+1. Run the following command (replacing <*SID*> with the name of the SAP instance and <*sdkfilename*> with the full file name of the SAP NetWeaver SDK) to copy the SDK into the container.
````bash sdkfile=<sdkfilename>
If not, then your SAP configuration and authentication secrets can and should be
docker cp $sdkfile sapcon-$sid:/sapcon-app/inst/ ````
+1. Run the following command (replacing <*SID*> with the name of the SAP instance) to start the container.
+ ````bash
+ sid=<SID>
+ docker start sapcon-$sid
+ ````
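    To verify the agent started cleanly, the container logs can be tailed; a small sketch assuming the container name created in the earlier steps:

    ````bash
    # Follow the connector container's logs to check for startup errors
    sid=<SID>
    docker logs -f sapcon-$sid
    ````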
+ ## Next steps Once the connector is deployed, proceed to deploy the Continuous Threat Monitoring for SAP solution content.
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Deploy the [SAP security content](sap-solution-security-content.md) from the Mic
The **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution enables the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
-Add SAP-related watchlists to your Microsoft Sentinel workspace manually.
- To deploy SAP solution security content, do the following: 1. In Microsoft Sentinel, on the left pane, select **Content hub (Preview)**.
To deploy SAP solution security content, do the following:
:::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Continuous Threat Monitoring for SAP' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
-1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace where you want to deploy the solution.
+1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution.
1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
- The default name for the workbook is **SAP - System Applications and Products**. Change it in the workbooks tab as needed.
- For more information, see [Microsoft Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md). 1. On the **Review + create tab** pane, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
To deploy SAP solution security content, do the following:
- **Threat Management** > **Workbooks** > **My workbooks**, to find the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks). - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
-1. Add SAP-related watchlists to use in your search, detection rules, threat hunting, and response playbooks. These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. Do the following:
-
- 1. Download SAP watchlists from the Microsoft Sentinel GitHub repository at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists.
-
- 1. In the Microsoft Sentinel **Watchlists** area, add the watchlists to your Microsoft Sentinel workspace. Use the downloaded CSV files as the sources, and then customize them as needed for your environment.
-
- [![SAP-related watchlists added to Microsoft Sentinel.](./media/deploy-sap-security-content/sap-watchlists.png)](./media/deploy-sap-security-content/sap-watchlists.png#lightbox)
-
- For more information, see [Use Microsoft Sentinel watchlists](../watchlists.md) and [Available SAP watchlists](sap-solution-security-content.md#available-watchlists).
- 1. In Microsoft Sentinel, go to the **Microsoft Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection: [![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](./media/deploy-sap-security-content/sap-data-connector.png)](./media/deploy-sap-security-content/sap-data-connector.png#lightbox)
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Follow your deployment journey through this series of articles, in which you'll
| **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md) | **6. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure SAP data connector to use SNC](configure-snc.md)
-> [!NOTE]
-> Extra steps are required to configure communications between SAP data connector and SAP over a Secure Network Communications (SNC) connection. This is covered in [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md) section of the guide.
- ## Next steps Begin the deployment of SAP continuous threat monitoring solution by reviewing the Prerequisites
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Track your SAP solution deployment journey through this series of articles:
> [!IMPORTANT] > - This article presents a [**step-by-step guide**](#deploy-change-requests) to deploying the required CRs. It's recommended for SOC engineers or implementers who may not necessarily be SAP experts.
-> - Experienced SAP administrators that are familiar with CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900163* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
+> - Experienced SAP administrators who are familiar with the CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900206* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
> [!NOTE] >
-> It is *strongly recommended* that the deployment of SAP CRs be carried out by an experienced SAP system administrator.
+> It is *strongly recommended* that the deployment of SAP CRs is carried out by an experienced SAP system administrator.
> > The steps below may differ according to the version of the SAP system and should be considered for demonstration purposes only. >
To deploy the CRs, follow the steps outlined below:
cp -p R*.NPL /usr/sap/trans/data/ ```
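    As a hedged reminder of SAP transport conventions rather than a documented step: a CR ships as a data file (`R*`) and a cofile (`K*`), and only the data-file copy is visible in this hunk. The matching cofiles typically go to the cofiles directory:

    ```bash
    # Data files (R*) belong in /usr/sap/trans/data/, cofiles (K*) in /usr/sap/trans/cofiles/
    cp -p R*.NPL /usr/sap/trans/data/
    cp -p K*.NPL /usr/sap/trans/cofiles/
    ```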
-### Set up the applications
+### Import the CRs
1. Launch the **SAP Logon** application and sign in to the SAP GUI console.
To deploy the CRs, follow the steps outlined below:
:::image type="content" source="media/preparing-sap/import-history.png" alt-text="Screenshot of import history.":::
-1. The *NPLK900180* change request is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
+1. The *NPLK900202* change request is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
:::image type="content" source="media/preparing-sap/import-status.png" alt-text="Screenshot of import status display." lightbox="media/preparing-sap/import-status-lightbox.png":::
To deploy the CRs, follow the steps outlined below:
## Configure Sentinel role
-After the *NPLK900163* change request is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
+After the *NPLK900206* change request is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
The following table lists the ABAP authorizations required to ensure that SAP lo
The required authorizations are listed here by log type. Only the authorizations listed for the types of logs you plan to ingest into Microsoft Sentinel are required. > [!TIP]
-> To create a role with all the required authorizations, deploy the SAP change request *NPLK900163* on the SAP system. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
+> To create a role with all the required authorizations, deploy the SAP change request *NPLK900206* on the SAP system. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
| Authorization Object | Field | Value | | -- | -- | -- |
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
To successfully deploy the SAP Continuous Threat Monitoring solution, you must m
| **System architecture** | The data connector component of the SAP solution is deployed as a Docker container, and each SAP client requires its own container instance.<br>The container host can be either a physical machine or a virtual machine, can be located either on-premises or in any cloud. <br>The VM hosting the container ***does not*** have to be located in the same Azure subscription as your Microsoft Sentinel workspace, or even in the same Azure AD tenant. | | **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- 2 cores<br>- 4 GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- 2 cores<br>- 8 GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- 4 cores<br>- 16 GB RAM | | **Administrative privileges** | Administrative privileges (root) are required on the container host machine. |
-| **Supported Linux versions** | Your Docker container host machine must run one of the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you can [deploy and configure the container manually](deploy-data-connector-agent-container.md#deploy-the-sap-data-connector-manually). |
+| **Supported Linux versions** | SAP Continuous Threat Monitoring data collection agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container.md?tabs=deploy-manually) instead of using the kickstart script. |
| **Network connectivity** | Ensure that the container host has access to: <br>- Microsoft Sentinel <br>- Azure key vault (in the deployment scenario where Azure key vault is used to store secrets)<br>- SAP system via the following TCP ports: *32xx*, *5xx13*, *33xx*, *48xx* (when SNC is used), where *xx* is the SAP instance number. A quick connectivity check sketch follows this table. |
-| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/)<br><br>Make sure that you also have an SAP user account in order to access the SAP software download page. |
+| **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/) |
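The SAP port reachability called out in the **Network connectivity** row above can be spot-checked from the container host with netcat, one of the utilities the kickstart script installs. A minimal sketch, assuming SAP instance number `00` and a placeholder host name:

```bash
# Ports shown assume instance number 00: 32xx -> 3200, 33xx -> 3300, 48xx -> 4800, 5xx13 -> 50013
for port in 3200 3300 4800 50013; do
  nc -z -v -w 5 <SAP system FQDN or IP> $port
done
```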
### SAP prerequisites | Prerequisite | Description | | - | -- |
-| **Supported SAP versions** | We recommend using [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
-| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sap-sdk-download)).<br>At the link, select **SAP NW RFC SDK 7.50** -> **Linux on X86_64 64BIT** -> **Download the latest version**. |
+| **Supported SAP versions** | SAP Continuous Threat Monitoring data collection agent works best with [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
+| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sap-sdk-download)).<br>At the link, select **SAP NW RFC SDK 7.50** -> **Linux on X86_64 64BIT** -> **Download the latest version**.<br><br>Make sure that you also have an SAP user account in order to access the SAP software download page. |
| **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` | | **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
-## SAP change request (CR) deployment
-
-Besides all the prerequisites listed above, a successful deployment of the SAP data connector depends on your SAP environment being properly configured and updated. This includes ensuring that the relevant SAP change requests (CRs), as well as a Microsoft-provided CR, are deployed on the SAP system and that a role is created in SAP to enable access for the SAP data connector.
-
-> [!NOTE]
-> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Retrieve the required CRs from the links in the tables below and proceed to the step-by-step guide.
->
-> Experienced SAP administrators may choose to create the role manually and assign it the appropriate permissions. In such a case, it is **not** necessary to deploy the CR *NPLK900163*, but you must instead create a role using the recommendations outlined in [Expert: Deploy SAP CRs and deploy required ABAP authorizations](preparing-sap.md#required-abap-authorizations). In any case, you must still deploy CR *NPLK900180* to enable the SAP data connector agent to collect data from your SAP system successfully.
### SAP environment validation steps
-1. Ensure the following SAP notes are deployed in your SAP system, according to its version:
+#### Ensure the following SAP notes are deployed in your SAP system, according to its version:
+
+> [!NOTE]
+>
+> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Determine which CRs need to be deployed, retrieve the required CRs from the links in the tables below and proceed to the step-by-step guide.
| SAP BASIS versions | Required note | | | | | - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* | | - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 | [2173545: CD: CHANGEDOCUMENT_READ_ALL](https://launchpad.support.sap.com/#/notes/2173545)* | | - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 to 752 | [2502336 - CD: RSSCD100 - read only from archive, not from database](https://launchpad.support.sap.com/#/notes/2502336)* |
-| | * A SAP account is required to access SAP notes |
+| | * An SAP account is required to access SAP notes |
-2. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR), according to the SAP version in use:
+#### Retrieval of additional information from SAP
+To enable the Microsoft Sentinel Continuous Threat Monitoring data connector to retrieve certain information from SAP, you must deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
+- **SAP BASIS 7.5 SP12 and above**: Client IP Address information from security audit log
+- **ANY SAP BASIS version**: DB Table logs
-| SAP BASIS versions | Required CR |
+| SAP BASIS versions | Recommended CR |
| | |
-| - 750 and later | *NPLK900180*: [K900180.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900180.NPL), [R900180.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900180.NPL) |
-| - 740 | *NPLK900179*: [K900179.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900179.NPL), [R900179.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900179.NPL) |
+| - 750 and later | *NPLK900202*: [K900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL), [R900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL) |
+| - 740 | *NPLK900201*: [K900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL), [R900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL) |
| | |
-3. (Optional) Download and install the following SAP change request from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR) to create a role required for the SAP data connector agent to connect to your SAP system:
+#### Role configuration
+To allow the Microsoft Sentinel Continuous Threat Monitoring data connector to connect to the SAP system, a role needs to be created. The role can be created by deploying the **NPLK900206** CR.
+Experienced SAP administrators may choose to create the role manually and assign it the appropriate permissions. In such a case, it is not necessary to deploy the CR *NPLK900206*, but you must instead create a role using the recommendations outlined in [Expert: Deploy SAP CRs and deploy required ABAP authorizations](preparing-sap.md#required-abap-authorizations).
-| SAP BASIS versions | Required CR |
+| SAP BASIS versions | Sample CR |
| | |
-| Any version | *NPLK900163** [K900163.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900163.NPL), [R900163.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900163.NPL)|
+| Any version | *NPLK900206** [K900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900206.NPL), [R900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900206.NPL)|
+| | * An SAP account is required to access SAP notes |
| | |
-> [!NOTE]
-> \* The optional NPLK900163 change request deploys a sample role
-- ## Next steps After verifying that all the prerequisites have been met, proceed to the next step to deploy the required CRs to your SAP system and configure authorization.
sentinel User Management Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/user-management-normalization-schema.md
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
### Updated property fields
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
Web Session events may also include [User](network-normalization-schema.md#user)
## Parsers
-For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
+For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md).
### Unifying parsers
You can also use workspace-deployed `ImWebSession` and `ASimWebSession` parsers
### Out-of-the-box, source-specific parsers
-Microsoft Sentinel provides the following out-of-the-box, product-specific DNS parsers:
-
-| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
-| | | |
-|**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> |
-| **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `AsimWebSessionZscalerZIA` (regular)<br> `vimWebSessionSzcalerZIA` (filtering) |
--
-These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
-
+For the list of Web Session parsers that Microsoft Sentinel provides out of the box, see the [ASIM parsers list](normalization-parsers-list.md#web-session-parsers).
### Add your own normalized parsers When implementing custom parsers for the Web Session information model, name your KQL functions using the following syntax:
When implementing custom parsers for the Web Session information model, name you
### Filtering parser parameters
-The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimized-parsers). While these parsers are optional, they can improve your query performance.
+The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md). While these parameters are optional, they can improve your query performance. A short usage sketch follows the parameter table below.
The following filtering parameters are available:
The following filtering parameters are available:
| **starttime** | datetime | Filter only Web sessions that **started** at or after this time. | | **endtime** | datetime | Filter only Web sessions that **started** running at or before this time. | | **srcipaddr_has_any_prefix** | dynamic | Filter only Web sessions for which the [source IP address field](network-normalization-schema.md#srcipaddr) prefix is in one of the listed values. Note that the list of values can include IP addresses as well as IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.|
+| **ipaddr_has_any_prefix** | dynamic | Filter only network sessions for which the [destination IP address field](network-normalization-schema.md#dstipaddr) or [source IP address field](network-normalization-schema.md#srcipaddr) prefix is in one of the listed values. Prefixes should end with a `.`, for example: `10.0.`. The length of the list is limited to 10,000 items.<br><br>The field [ASimMatchingIpAddr](normalization-common-fields.md#asimmatchingipaddr) is set to one of the values `SrcIpAddr`, `DstIpAddr`, or `Both` to reflect the matching field or fields. |
| **url_has_any** | dynamic | Filter only Web sessions for which the [URL field](#url) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items.| | **httpuseragent_has_any** | dynamic | Filter only web sessions for which the [user agent field](#httpuseragent) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items. | | **eventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. |
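To illustrate how these parameters are passed in practice, here's a hedged sketch that invokes the built-in `_Im_WebSession` filtering parser through the Azure CLI Log Analytics query command; the workspace GUID is a placeholder, and only parameters listed in the table above are used:

```azurecli
# Query the last hour of web sessions to a given URL via the filtering parser
az monitor log-analytics query \
  --workspace <Log Analytics workspace GUID> \
  --analytics-query "_Im_WebSession(starttime=ago(1h), endtime=now(), url_has_any=dynamic(['contoso.com'])) | take 10"
```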
The following list mentions fields that have specific guidelines for Web Session
| **EventResult** | Mandatory | Enumerated | Describes the event result, normalized to one of the following values: <br> - `Success` <br> - `Partial` <br> - `Failure` <br> - `NA` (not applicable) <br><br>For an HTTP session, `Success` is defined as a status code lower than `400`, and `Failure` is defined as a status code higher than `400`. For a list of HTTP status codes refer to [W3 Org](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html).<br><br>The source may provide only a value for the [EventResultDetails](#eventresultdetails) field, which must be analyzed to get the **EventResult** value. | | <a name="eventresultdetails"></a>**EventResultDetails** | Mandatory | String | For HTTP sessions, the value should be the HTTP status code. <br><br>**Note**: The value may be provided in the source record using different terms, which should be normalized to these values. The original value should be stored in the **EventOriginalResultDetails** field.| | **EventSchema** | Mandatory | String | The name of the schema documented here is `WebSession`. |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.2` |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.3` |
| **Dvc** fields| | | For Web Session events, device fields refer to the system reporting the Web Session event. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| | - | | Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)|
-| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
+| Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)<br>- [DvcDescription](normalization-common-fields.md#dvcdescription)|
### Network session fields
If the event is reported by one of the endpoints of the web session, it may incl
### Schema updates
-The Web Session schema relies on the Network Session schema. Therefore, [Network Session schema updates](network-normalization-schema.md#schema-updates) apply to the Web Session schema as well.
+The Web Session schema relies on the Network Session schema. Therefore, [Network Session schema updates](network-normalization-schema.md#schema-updates) apply to the Web Session schema as well. The WebSession schema version has been updated to reflect this.
## Next steps
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
To configure the minimum TLS version for a Service Bus namespace with a template
Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Service Bus resource provider.
-## Check the minimum required TLS version for multiple namespaces
-
-To check the minimum required TLS version across a set of Service Bus namespaces with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md).
-
-Running the following query in the Resource Graph Explorer returns a list of Service Bus namespaces and displays the minimum TLS version for each namespace:
-
-```kusto
-resources
-| where type =~ 'Microsoft.ServiceBus/namespaces'
-| extend minimumTlsVersion = parse_json(properties).minimumTlsVersion
-| project subscriptionId, resourceGroup, name, minimumTlsVersion
-```
- ## Test the minimum TLS version from a client To test that the minimum required TLS version for a Service Bus namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
service-bus-messaging Transport Layer Security Enforce Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
Be careful to restrict assignment of these roles only to those who require the a
## Network considerations
-When a client sends a request to Service Bus namespace, the client establishes a connection with the public endpoint of the Service Bus namespace first, before processing any requests. The minimum TLS version setting is checked after the connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection will continue to succeed, but the request will eventually fail.
+When a client sends a request to a Service Bus namespace, the client establishes a connection with the Service Bus namespace endpoint first, before processing any requests. The minimum TLS version setting is checked after the TLS connection is established. If the request uses an earlier version of TLS than that specified by the setting, the connection will continue to succeed, but the request will eventually fail.
> [!NOTE] > Due to backwards compatibility, namespaces that do not have the `MinimumTlsVersion` setting specified or have specified this as 1.0, we do not do any TLS checks when connecting via the SBMP protocol.
+Here are a few important points to consider:
+
+- A network trace would show the successful establishment of a TCP connection and successful TLS negotiation, before a 401 is returned if the TLS version used is less than the minimum TLS version configured.
+- Penetration or endpoint scanning on `yournamespace.servicebus.windows.net` will indicate support for TLS 1.0, TLS 1.1, and TLS 1.2, as the service continues to support all these protocols. The minimum TLS version, enforced at the namespace level, indicates the lowest TLS version the namespace will support.
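One way to observe this from a client is to probe the endpoint with openssl while pinning an older protocol version; a hedged sketch, assuming an OpenSSL build that still enables TLS 1.1:

```bash
# The TLS 1.1 handshake still succeeds; enforcement of the minimum version happens at the request level
openssl s_client -connect yournamespace.servicebus.windows.net:443 -tls1_1 </dev/null 2>/dev/null | grep "Protocol"
```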
+ ## Next steps See the following documentation for more information. - [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md) - [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)-- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
+- [Use Azure Policy to audit for compliance of minimum TLS version for a Service Bus namespace](transport-layer-security-audit-minimum-version.md)
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
-| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
-| 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
-| 8.1 CU3.1<br>8.1.337.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
-| 8.1 CU3<br>8.1.335.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
-| 8.1 CU2<br>8.1.329.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
-| 8.1 CU1<br>8.1.321.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
-| 8.1 RTO<br>8.1.316.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
-| 8.0 CU3<br>8.0.536.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
-| 8.0 CU2<br>8.0.521.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
-| 8.0 CU1<br>8.0.516.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
+| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
+| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
+| 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
+| 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
+| 8.1 CU3.1<br>8.1.337.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
+| 8.1 CU3<br>8.1.335.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
+| 8.1 CU2<br>8.1.329.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 3.1 (GA), <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
+| 8.1 CU1<br>8.1.321.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
+| 8.1 RTO<br>8.1.316.9590 | 7.2 CU7<br>7.2.477.9590 | 8.0 | Less than or equal to version 5.1 | .NET 5.0, >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | June 30, 2022 |
+| 8.0 CU3<br>8.0.536.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0, >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
+| 8.0 CU2<br>8.0.521.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0, >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
+| 8.0 CU1<br>8.0.516.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0, >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
| 8.0 RTO<br>8.0.514.9590 | 7.1 CU10<br>7.1.510.9590 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | February 28, 2022 |
-| 7.2 CU7<br>7.2.477.9590 | 7.0 CU9<br>7.0.478.9590 | 7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2021 |
+| 7.2 CU7<br>7.2.477.9590 | 7.0 CU9<br>7.0.478.9590 | 7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2021 |
| 7.2 CU6<br>7.2.457.9590 | 7.0 CU4<br>7.0.470.9590 |7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 | | 7.2 RTO-CU5<br>7.2.413.9590-7.2.452.9590 | 7.0 CU4<br>7.0.470.9590 | 7.1 |Less than or equal to version 4.2 | >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 | | 7.1<br>7.1.510.9590 |7.0 CU3<br>7.0.466.9590 |N/A | Less than or equal to version 4.1 | >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | July 31, 2021 |
Support for Service Fabric on a specific OS ends when support for the OS version
#### Ubuntu | OS version | Service Fabric support end date| OS Lifecycle link | | | | |
-| Ubuntu 18.04 | April 2028 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
-| Ubuntu 16.04 | April 2024 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
+| Ubuntu 20.04 | April 2025 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
+| Ubuntu 18.04 | April 2023 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
+| Ubuntu 16.04 | April 2021 | <a href="https://wiki.ubuntu.com/Releases">Ubuntu lifecycle</a>|
## Service Fabric version name and number reference The following table lists the version names of Service Fabric and their corresponding version numbers.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Site Recovery allows you to perform global disaster recovery. You can repl
**Geographic cluster** | **Azure regions** -- | -- America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, West US 3, Central US, North Central US
-Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North, UAE Central (UAE is treated as part of the Europe geo cluster)
+Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North (UAE is treated as part of the Europe geo cluster)
Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South JIO | JIO India West Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas
Germany | Germany Central, Germany Northeast China | China East, China North, China North2, China East2 Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.<br/><br/> To use restricted regions as your primary or recovery region, please get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process).
+Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, request to be allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both the source and target subscriptions.
>[!NOTE] >
spring-cloud Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-tls-termination.md
Create an application gateway using the following steps to enable SSL terminatio
1. Fill in the required fields for creating the application gateway. Leave the default values as they are. 1. After you provide a value for the **Virtual network** field, the **Subnet** field appears. Create a separate subnet for the application gateway in the VNET, as shown in the following screenshot.
- :::image type="content" source="media/expose-apps-gateway-tls-termination/create-application-gateway-basics.png" alt-text="Azure portal screenshot of 'Create application gateway' page.":::
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-application-gateway-basics.png" alt-text="Screenshot of Azure portal 'Create application gateway' page.":::
1. Create a public IP address and assign it to the frontend of the application gateway, as shown in the following screenshot.
- :::image type="content" source="media/expose-apps-gateway-tls-termination/create-frontend-ip.png" alt-text="Azure portal screenshot showing Frontends tab of 'Create application gateway' page.":::
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-frontend-ip.png" alt-text="Screenshot of Azure portal showing Frontends tab of 'Create application gateway' page.":::
1. Create a backend pool for the application gateway. Select **Target** as your FQDN of the application deployed in Azure Spring Cloud.
- :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-pool.png" alt-text="Azure portal screenshot of 'Add a backend pool' page.":::
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-pool.png" alt-text="Screenshot of Azure portal 'Add a backend pool' page.":::
1. Create a routing rule with an HTTP listener.
1. Select the public IP that you created earlier.
Create an application gateway using the following steps to enable SSL terminatio
1. Select the managed identity you created earlier.
1. Select the right key vault and certificate, which were added to the key vault earlier.
- :::image type="content" source="media/expose-apps-gateway-tls-termination/create-routingrule-with-http-listener.png" alt-text="Azure portal screenshot of 'Add a routing rule' page.":::
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-routingrule-with-http-listener.png" alt-text="Screenshot of Azure portal 'Add a routing rule' page.":::
1. Select the **Backend targets** tab.
- :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-http-settings.png" alt-text="Azure portal screenshot of 'Add a HTTP setting' page.":::
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-http-settings.png" alt-text="Screenshot of Azure portal 'Add a H T T P setting' page.":::
1. Select **Review and Create** to create the application gateway.
spring-cloud How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-appdynamics-java-agent-monitor.md
To activate an application through the Azure portal, use the following steps.
1. Select **Apps** from the **Settings** section of the left navigation pane.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png" alt-text="Azure portal screenshot showing the Apps section." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png" alt-text="Screenshot of Azure portal showing the Apps section." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png":::
1. Select the application to navigate to the **Overview** page.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png" alt-text="Azure portal screenshot the app's Overview page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png" alt-text="Screenshot of Azure portal app overview page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png":::
1. Select **Configuration** in the left navigation pane to add, update, or delete the environment variables of the application.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png" alt-text="Azure portal screenshot showing the 'Environment variables' section of the app's Configuration page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png" alt-text="Screenshot of Azure portal showing the 'Environment variables' section of the app's Configuration page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png":::
1. Select **General settings** to add, update, or delete the JVM options of the application.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png" alt-text="Azure portal screenshot showing the 'General settings' section of the app's Configuration page, with 'JVM options' highlighted." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png" alt-text="Screenshot of Azure portal showing the 'General settings' section of the app's Configuration page, with 'J V M options' highlighted." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png":::
## Automate provisioning
spring-cloud How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-application-insights.md
When the **Application Insights** feature is enabled, you can:
* In the left navigation pane, select **Application Insights** to view the **Overview** page of Application Insights. The **Overview** page will show you an overview of all running applications.
* Select **Application Map** to see the status of calls between applications.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-map.png" alt-text="Azure portal screenshot of Application Insights with Application map page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-map.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-map.png" alt-text="Screenshot of Azure portal Application Insights with Application map page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-map.png":::
* Select the link between customers-service and `petclinic` to see more details such as a query from SQL.
* Select an endpoint to see all the applications making requests to the endpoint.
* In the left navigation pane, select **Performance** to see the performance data of all applications' operations, dependencies, and roles.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-performance.png" alt-text="Azure portal screenshot of Application Insights with Performance page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-performance.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-performance.png" alt-text="Screenshot of Azure portal Application Insights with Performance page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-performance.png":::
* In the left navigation pane, select **Failures** to see any unexpected failures or exceptions from your applications.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-failures.png" alt-text="Azure portal screenshot of Application Insights with Failures page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-failures.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-failures.png" alt-text="Screenshot of Azure portal Application Insights with Failures page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-failures.png":::
* In the left navigation pane, select **Metrics** and select the namespace; you'll see both Spring Boot metrics and custom metrics, if any.
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-metrics.png" alt-text="Azure portal screenshot of Application Insights with Metrics page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-metrics.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Metrics page showing." lightbox="media/spring-cloud-application-insights/insights-process-agent-metrics.png":::
* In the left navigation pane, select **Live Metrics** to see the real-time metrics for different dimensions.
- :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png" alt-text="Azure portal screenshot of Application Insights with Live Metrics page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Live Metrics page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-live-metrics.png":::
* In the left navigation pane, select **Availability** to monitor the availability and responsiveness of Web apps by creating [Availability tests in Application Insights](../azure-monitor/app/monitor-web-app-availability.md).
- :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-availability.png" alt-text="Azure portal screenshot of Application Insights with Availability page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-availability.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/petclinic-microservices-availability.png" alt-text="Screenshot of Azure portal Application Insights with Availability page showing." lightbox="media/spring-cloud-application-insights/petclinic-microservices-availability.png":::
* In the left navigation pane, select **Logs** to view all applications' logs, or one application's logs when filtering by `cloud_RoleName`.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-application-logs.png" alt-text="Azure portal screenshot of Application Insights with Logs page showing." lightbox="media/enterprise/how-to-application-insights/application-insights-application-logs.png":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-application-logs.png" alt-text="Screenshot of Azure portal Application Insights with Logs page showing." lightbox="media/enterprise/how-to-application-insights/application-insights-application-logs.png":::
## Manage Application Insights using the Azure portal
Enable the Java In-Process Agent by using the following procedure.
1. Select an existing instance of Application Insights or create a new one.
1. When **Application Insights** is enabled, you can configure one optional sampling rate (default 10.0%).
- :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent.png" alt-text="Azure portal screenshot of Azure Spring Cloud instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/spring-cloud-application-insights/insights-process-agent.png":::
+ :::image type="content" source="media/spring-cloud-application-insights/insights-process-agent.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and 'Enable Application Insights' checkbox highlighted." lightbox="media/spring-cloud-application-insights/insights-process-agent.png":::
1. Select **Save** to save the change.
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**.
1. Enable Application Insights by selecting **Edit binding**, or the **Unbound** hyperlink.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Azure portal screenshot of Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-binding-enable.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Edit binding' option.":::
1. Edit **Application Insights** or **Sampling rate**, then select **Save**.
You can use the Portal to check or update the current settings in Application In
1. Select **Application Insights**.
1. Select **Unbind binding** to disable Application Insights.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Azure portal screenshot of Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-unbind-binding.png" alt-text="Screenshot of Azure portal Azure Spring Cloud instance with Application Insights page showing and drop-down menu visible with 'Unbind binding' option.":::
### Change Application Insights Settings

Select the name under the *Application Insights* column to open the Application Insights section.

### Edit Application Insights buildpack bindings in Build Service
Application Insights settings are found in the *ApplicationInsights* item listed
1. Select the **Bound** hyperlink, or select **Edit Binding** under the ellipse, to open and edit the Application Insights buildpack bindings.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-builder-settings.png" alt-text="Azure portal screenshot of 'Edit bindings for default builder' pane.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-builder-settings.png" alt-text="Screenshot of Azure portal 'Edit bindings for default builder' pane.":::
1. Edit the binding settings, then select **Save**.
- :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-edit-binding.png" alt-text="Azure portal screenshot of 'Edit binding' pane.":::
+ :::image type="content" source="media/enterprise/how-to-application-insights/application-insights-edit-binding.png" alt-text="Screenshot of Azure portal 'Edit binding' pane.":::
::: zone-end
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-config-server.md
The following table shows some examples for the **Additional repositories** sect
| *test-config-server-app-1/dev* | The pattern and repository URI will match a Spring Boot application named `test-config-server-app-1` with the dev profile. |
| *test-config-server-app-2/prod* | The pattern and repository URI will match a Spring Boot application named `test-config-server-app-2` with the prod profile. |

## Attach your Config Server repository to Azure Spring Cloud
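The attach command itself isn't reproduced in this digest. As a rough sketch (the repository URL and label are placeholders), pointing the managed Config Server at a Git repository with the Azure CLI typically looks like the following:

```azurecli
# Point the managed Config Server at a Git repository (values are placeholders).
az spring-cloud config-server git set \
    --name <service-instance-name> \
    --resource-group <resource-group> \
    --uri https://github.com/<org>/<config-repo> \
    --label main
```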
spring-cloud How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-custom-persistent-storage.md
Use the following steps to bind an Azure Storage account as a storage resource i
| Account name | The name of the storage account. |
| Account key | The storage account key. |
- :::image type="content" source="media/how-to-custom-persistent-storage/add-storage-resource.png" alt-text="Azure portal screenshot showing the Storage page and the 'Add storage' pane" lightbox="media/how-to-custom-persistent-storage/add-storage-resource.png":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/add-storage-resource.png" alt-text="Screenshot of Azure portal showing the Storage page and the 'Add storage' pane." lightbox="media/how-to-custom-persistent-storage/add-storage-resource.png":::
1. Go to the **Apps** page, then select an application to mount the persistent storage.
- :::image type="content" source="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png" alt-text="Screenshot of the Apps page" lightbox="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png" alt-text="Screenshot of Azure portal Apps page." lightbox="media/how-to-custom-persistent-storage/select-app-mount-persistent-storage.png":::
1. Select **Configuration**, then select **Persistent Storage**.
Use the following steps to bind an Azure Storage account as a storage resource i
| Mount options | Optional |
| Read only | Optional |
- :::image type="content" source="media/how-to-custom-persistent-storage/add-persistent-storage.png" alt-text="Screenshot of the 'Add persistent storage' form":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/add-persistent-storage.png" alt-text="Screenshot of Azure portal 'Add persistent storage' form.":::
1. Select **Save** to apply all the configuration changes.
- :::image type="content" source="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png" alt-text="Screenshot of the Persistent Storage section of the Configuration page" lightbox="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png":::
+ :::image type="content" source="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png" alt-text="Screenshot of Azure portal Persistent Storage section of the Configuration page." lightbox="media/how-to-custom-persistent-storage/save-persistent-storage-changes.png":::
# [CLI](#tab/Azure-CLI)
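Under the CLI tab, the equivalent flow is to register the storage account as a storage resource and then mount an Azure Files share into an app. The following is a hedged sketch; every name and the mount path are placeholders:

```azurecli
# Register the Azure Storage account as a storage resource in the service instance.
az spring-cloud storage add \
    --name <storage-resource-name> \
    --storage-type StorageAccount \
    --account-name <storage-account-name> \
    --account-key <storage-account-key> \
    --service <service-instance-name> \
    --resource-group <resource-group>

# Mount an Azure Files share from that storage resource into an app.
az spring-cloud app append-persistent-storage \
    --name <app-name> \
    --service <service-instance-name> \
    --resource-group <resource-group> \
    --persistent-storage-type AzureFileVolume \
    --storage-name <storage-resource-name> \
    --share-name <file-share-name> \
    --mount-path /<mount-path>
```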
spring-cloud How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-with-custom-container-image.md
To disable listening on a port for images that aren't web applications, add the
1. Select **Apps** from the left menu, then select **Create App**.
1. Name your app, and in the **Runtime platform** pulldown list, select **Custom Container**.
- :::image type="content" source="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png" alt-text="Azure portal screenshot of Create App page with Runtime platform dropdown showing and Custom Container selected." lightbox="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png":::
+ :::image type="content" source="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png" alt-text="Screenshot of Azure portal Create App page with Runtime platform dropdown showing and Custom Container selected." lightbox="media/how-to-deploy-with-custom-container-image/create-app-custom-container.png":::
1. Select **Edit** under *Image*, then fill in the fields as shown in the following image:
- :::image type="content" source="media/how-to-deploy-with-custom-container-image/custom-image-settings.png" alt-text="Azure portal screenshot showing the Custom Image Settings pane." lightbox="media/how-to-deploy-with-custom-container-image/custom-image-settings.png":::
+ :::image type="content" source="media/how-to-deploy-with-custom-container-image/custom-image-settings.png" alt-text="Screenshot of Azure portal showing the Custom Image Settings pane." lightbox="media/how-to-deploy-with-custom-container-image/custom-image-settings.png":::
> [!NOTE]
> The **Commands** and **Arguments** fields are optional; they're used to overwrite the `cmd` and `entrypoint` of the image.
spring-cloud How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-dynatrace-one-agent-monitor.md
Next, go to the **Profiling and optimization** section.
You can find the **CPU analysis** from **Profiling and optimization/CPU analysis**:

Next, go to the **Databases** section.
spring-cloud How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-apm-java-agent-monitor.md
Before proceeding, you'll need your Elastic APM server connectivity information
1. In the Azure portal, go to the **Overview** page of your Elastic deployment, then select **Manage Elastic Cloud Deployment**.
- :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png" alt-text="Azure portal screenshot of 'Elasticsearch (Elastic Cloud)' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png":::
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png" alt-text="Screenshot of Azure portal 'Elasticsearch (Elastic Cloud)' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-link-from-microsoft-azure.png":::
1. Under your deployment on Elastic Cloud Console, select the **APM & Fleet** section to get Elastic APM Server endpoint and secret token.
- :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png" alt-text="Elastic screenshot 'APM & Fleet' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png":::
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png" alt-text="Elastic screenshot 'A P M & Fleet' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-endpoint-secret.png":::
1. Download Elastic APM Java Agent from [Maven Central](https://search.maven.org/search?q=g:co.elastic.apm%20AND%20a:elastic-apm-agent).
Before proceeding, you'll need your Elastic APM server connectivity information
1. Upload Elastic APM Agent to the custom persistent storage you enabled earlier. Go to Azure Fileshare and select **Upload** to add the agent JAR file.
- :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png" alt-text="Azure portal screenshot showing 'Upload files' pane of 'File share' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png":::
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png" alt-text="Screenshot of Azure portal showing 'Upload files' pane of 'File share' page." lightbox="media/how-to-elastic-apm-java-agent-monitor/upload-files-microsoft-azure.png":::
1. After you have the Elastic APM endpoint and secret token, use the following command to activate Elastic APM Java agent when deploying applications. The placeholder *`<agent-location>`* refers to the mounted storage location of the Elastic APM Java Agent.
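   The activation command itself isn't included in this digest. A hedged sketch follows: the Elastic APM variable names are the agent's standard ones, and all values, including the agent JAR name under *`<agent-location>`*, are placeholders.

   ```azurecli
   az spring-cloud app deploy \
       --name <app-name> \
       --service <service-instance-name> \
       --resource-group <resource-group> \
       --artifact-path <path-to-your-JAR-file> \
       --jvm-options="-javaagent:<agent-location>/elastic-apm-agent.jar" \
       --env ELASTIC_APM_SERVER_URLS=<apm-server-endpoint> \
             ELASTIC_APM_SECRET_TOKEN=<secret-token> \
             ELASTIC_APM_SERVICE_NAME=<service-name>
   ```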
Use the following steps to monitor applications and metrics:
1. In the Azure portal, go to the **Overview** page of your Elastic deployment, then select the Kibana link.
- :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png" alt-text="Azure portal screenshot showing Elasticsearch page with 'Deployment URL / Kibana' highlighted." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png":::
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png" alt-text="Screenshot of Azure portal showing Elasticsearch page with 'Deployment U R L / Kibana' highlighted." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-get-kibana-link.png":::
1. After Kibana is open, search for *APM* in the search bar, then select **APM**.
- :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png" alt-text="Elastic / Kibana screenshot showing APM search results." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png":::
+ :::image type="content" source="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png" alt-text="Elastic / Kibana screenshot showing A P M search results." lightbox="media/how-to-elastic-apm-java-agent-monitor/elastic-apm-kibana-search-apm.png":::
Kibana APM is the curated application that supports application monitoring workflows. Here you can view high-level details such as request/response times, throughput, and the transactions in a service with the most impact on duration. You can drill down into a specific transaction to understand transaction-specific details such as distributed tracing. The Elastic APM Java agent also captures JVM metrics from the Azure Spring Cloud apps, which are available in the Kibana app for troubleshooting. Using the built-in AI engine in the Elastic solution, you can also enable anomaly detection on Azure Spring Cloud services and choose an appropriate action, such as a Teams notification, creation of a JIRA issue, a webhook-based API call, and others.

## Next steps
spring-cloud How To Elastic Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-elastic-diagnostic-settings.md
To configure diagnostics settings, use the following steps:
1. Enter a name for the setting, choose **Send to partner solution**, then select **Elastic** and an Elastic deployment where you want to send the logs.
1. Select **Save**.

> [!NOTE]
> There might be a gap of up to 15 minutes between when logs are emitted and when they appear in your Elastic deployment.
Use the following steps to analyze the logs:
1. From the Elastic deployment overview page in the Azure portal, open **Kibana**.
- :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png" alt-text="Azure portal screenshot showing 'Elasticsearch (Elastic Cloud)' page with Deployment URL / Kibana highlighted." lightbox="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png":::
+ :::image type="content" source="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png" alt-text="Screenshot of Azure portal showing 'Elasticsearch (Elastic Cloud)' page with Deployment U R L / Kibana highlighted." lightbox="media/how-to-elastic-diagnostic-settings/elastic-on-azure-native-microsoft-azure.png":::
1. In Kibana, in the **Search** bar at top, type *Spring Cloud type:dashboard*.
spring-cloud How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enable-system-assigned-managed-identity.md
To set up a managed identity in the portal, first create an app, and then enable
3. Select **Identity**.
4. Within the **System assigned** tab, switch **Status** to *On*. Select **Save**.

### [Azure CLI](#tab/azure-cli)
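The CLI steps that follow the tab marker are elided in this digest; a minimal sketch for enabling the system-assigned identity (all names are placeholders) might be:

```azurecli
# Enable a system-assigned managed identity on an existing app.
az spring-cloud app identity assign \
    --name <app-name> \
    --service <service-instance-name> \
    --resource-group <resource-group>
```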
To remove system-assigned managed identity from an app that no longer needs it:
1. Navigate to the desired application and select **Identity**.
1. Under **System assigned**/**Status**, select **Off** and then select **Save**:

### [Azure CLI](#tab/azure-cli)
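A hedged sketch of the corresponding removal command (treat the exact flags as an assumption rather than the article's snippet):

```azurecli
# Remove the system-assigned managed identity from an app that no longer needs it.
az spring-cloud app identity remove \
    --name <app-name> \
    --service <service-instance-name> \
    --resource-group <resource-group>
```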
spring-cloud How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-build-service.md
In Azure Spring Cloud, the existing Standard tier already supports compiling use
Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool during or after creating a new service instance of Azure Spring Cloud using the **VMware Tanzu settings**. The Build Agent Pool scale set sizes available are:
The Build Agent Pool scale set sizes available are:
| S4 | 5 vCPU, 10 Gi |
| S5 | 6 vCPU, 12 Gi |
-The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance.
+The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size.
## Default Builder and Tanzu Buildpacks
Besides the `default` builder, you can also create custom builders with the prov
All the builders configured in a Spring Cloud Service instance are listed in the **Build Service** section under **VMware Tanzu components**. Select **Add** to create a new builder. The image below shows the resources you should use to create the customized builder. You can also edit a custom builder. You can update the buildpacks or the [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html), but the builder name is read only. You can delete any custom builder, but the `default` builder is read only.
az spring-cloud app deploy \
If the builder isn't specified, the `default` builder will be used.
+You can also configure the build environment and build resources by using the following command:
+
+```azurecli
+az spring-cloud app deploy \
+ --name <app-name> \
+ --build-env <key1=value1>, <key2=value2> \
+ --build-cpu <build-cpu-size> \
+ --build-memory <build-memory-size> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+If you're using the `tanzu-buildpacks/java-azure` buildpack, we recommend that you set the `BP_JVM_VERSION` environment variable in the `build-env` argument.
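+For example, a deployment that pins the JVM version through the build environment might look like the following sketch (the version value and the other arguments are illustrative):
+
+```azurecli
+az spring-cloud app deploy \
+    --name <app-name> \
+    --build-env BP_JVM_VERSION=11.* \
+    --builder <builder-name> \
+    --artifact-path <path-to-your-JAR-file>
+```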
+
## Real-time build logs

A build task will be triggered when an app is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information on using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
spring-cloud How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-move-across-regions.md
After you modify the template, use the following steps to deploy the template an
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the top search box, search for *Deploy a custom template*.
- :::image type="content" source="media/how-to-move-across-regions/search-deploy-template.png" alt-text="Azure portal screenshot showing search results." lightbox="media/how-to-move-across-regions/search-deploy-template.png" border="true":::
+ :::image type="content" source="media/how-to-move-across-regions/search-deploy-template.png" alt-text="Screenshot of Azure portal showing search results." lightbox="media/how-to-move-across-regions/search-deploy-template.png" border="true":::
1. Under **Services**, select **Deploy a custom template**.
1. Go to the **Select a template** tab, then select **Build your own template in the editor**.
After you modify the template, use the following steps to deploy the template an
- The target region. - Any other parameters required for the template.
- :::image type="content" source="media/how-to-move-across-regions/deploy-template.png" alt-text="Azure portal screenshot showing 'Custom deployment' pane.":::
+ :::image type="content" source="media/how-to-move-across-regions/deploy-template.png" alt-text="Screenshot of Azure portal showing 'Custom deployment' pane.":::
1. Select **Review + create** to create the target service instance.
1. Wait until the template has deployed successfully. If the deployment fails, select **Deployment details** to view the failure reason, then update the template or configurations accordingly.
spring-cloud How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-permissions.md
The Developer role includes permissions to restart apps and see their log stream
* **Read : Read operation status**
- [![Azure portal screenshot that shows the selections for Developer permissions.](media/spring-cloud-permissions/developer-permissions-box.png)](media/spring-cloud-permissions/developer-permissions-box.png#lightbox)
+ [![Screenshot of Azure portal that shows the selections for Developer permissions.](media/spring-cloud-permissions/developer-permissions-box.png)](media/spring-cloud-permissions/developer-permissions-box.png#lightbox)
9. Select **Add**.
This procedure defines a role that has permissions to deploy, test, and restart
* **Read : List available skus**
- [![Azure portal screenshot that shows the selections for DevOps permissions.](media/spring-cloud-permissions/dev-ops-permissions.png)](media/spring-cloud-permissions/dev-ops-permissions.png#lightbox)
+ [![Screenshot of Azure portal that shows the selections for DevOps permissions.](media/spring-cloud-permissions/dev-ops-permissions.png)](media/spring-cloud-permissions/dev-ops-permissions.png#lightbox)
3. Select **Add**.
This procedure defines a role that has permissions to deploy, test, and restart
* **Read : Read operation status**
- [![Azure portal screenshot that shows the selections for Ops - Site Reliability Engineering permissions.](media/spring-cloud-permissions/ops-sre-permissions.png)](media/spring-cloud-permissions/ops-sre-permissions.png#lightbox)
+ [![Screenshot of Azure portal that shows the selections for Ops - Site Reliability Engineering permissions.](media/spring-cloud-permissions/ops-sre-permissions.png)](media/spring-cloud-permissions/ops-sre-permissions.png#lightbox)
3. Select **Add**.
This role can create and configure everything in Azure Spring Cloud and apps wit
* **Read : List available skus**
- [![Azure portal screenshot that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions.](media/spring-cloud-permissions/pipelines-permissions-box.png)](media/spring-cloud-permissions/pipelines-permissions-box.png#lightbox)
+ [![Screenshot of Azure portal that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions.](media/spring-cloud-permissions/pipelines-permissions-box.png)](media/spring-cloud-permissions/pipelines-permissions-box.png#lightbox)
4. Select **Add**.
spring-cloud How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-setup-autoscale.md
To follow these procedures, you need:
6. Go to the **Scale out** tab under **Settings** in the left navigation pane.
7. Select the deployment for which you want to set up Autoscale. The options for Autoscale are described in the following section.
-![Azure portal screenshot of **Scale out** page with `demo/default` deployment indicated.](./media/spring-cloud-autoscale/autoscale-menu.png)
+![Screenshot of Azure portal **Scale out** page with `demo/default` deployment indicated.](./media/spring-cloud-autoscale/autoscale-menu.png)
## Set up Autoscale settings for your application in the Azure portal
There are two options for Autoscale demand management:
In the Azure portal, choose how you want to scale. The following figure shows the **Custom autoscale** option and mode settings.

## Set up Autoscale settings for your application in Azure CLI
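The CLI steps themselves aren't included in this digest. As a rough sketch using the generic Azure Monitor autoscale commands, where the deployment resource path, metric name, and thresholds are illustrative assumptions:

```azurecli
# Create an autoscale setting scoped to a specific app deployment.
az monitor autoscale create \
    --resource-group <resource-group> \
    --name <autoscale-setting-name> \
    --resource /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-instance-name>/apps/<app-name>/deployments/<deployment-name> \
    --min-count 1 --max-count 5 --count 1

# Add a scale-out rule; the metric name and threshold are placeholders.
az monitor autoscale rule create \
    --resource-group <resource-group> \
    --autoscale-name <autoscale-setting-name> \
    --scale out 1 \
    --condition "<metric-name> > 75 avg 5m"
```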
spring-cloud How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-start-stop-service.md
In the Azure portal, use the following steps to stop a running Azure Spring Clou
1. Go to the Azure Spring Cloud service overview page.
2. Select **Stop** to stop a running instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-stop-service.png" alt-text="Azure portal screenshot showing the Azure Spring Cloud Overview page with the Stop button and Status value highlighted":::
+ :::image type="content" source="media/stop-start-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Cloud Overview page with the Stop button and Status value highlighted.":::
3. After the instance stops, the status will show **Succeeded (Stopped)**.
In the Azure portal, use the following steps to start a stopped Azure Spring Clo
1. Go to the Azure Spring Cloud service overview page.
2. Select **Start** to start a stopped instance.
- :::image type="content" source="media/stop-start-service/spring-cloud-start-service.png" alt-text="Azure portal screenshot showing the Azure Spring Cloud Overview page with the Start button and Status value highlighted":::
+ :::image type="content" source="media/stop-start-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Cloud Overview page with the Start button and Status value highlighted.":::
3. After the instance starts, the status will show **Succeeded (Running)**.
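The same operations are available from the Azure CLI; a minimal sketch (service and resource group names are placeholders) is:

```azurecli
# Stop a running Azure Spring Cloud service instance.
az spring-cloud stop --name <service-instance-name> --resource-group <resource-group>

# Start a stopped Azure Spring Cloud service instance.
az spring-cloud start --name <service-instance-name> --resource-group <resource-group>
```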
spring-cloud How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-api-portal.md
az spring-cloud api-portal update --assign-endpoint
Select the `endpoint URL` to go to API portal. You'll see all the routes configured in Spring Cloud Gateway for Tanzu.

## Try APIs using API portal
Select the `endpoint URL` to go to API portal. You'll see all the routes configu
1. Select the API you would like to try.
1. Select **EXECUTE** and the response will be shown.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of API portal.":::
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of A P I portal.":::
## Next steps
spring-cloud How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-spring-cloud-gateway.md
Use the following steps to create an example application using Spring Cloud Gate
Select **Yes** next to *Assign endpoint* to assign a public endpoint. You'll get a URL in a few minutes. Save the URL to use later.
- :::image type="content" source="media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Azure portal screenshot of Azure Spring Cloud overview page with 'Assign endpoint' highlighted.":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/gateway-overview.png" alt-text="Screenshot of Azure portal Azure Spring Cloud overview page with 'Assign endpoint' highlighted.":::
You can also use CLI to do it, as shown in the following command:
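The command itself is truncated out of this digest. Assuming the Enterprise-tier gateway commands in the spring-cloud CLI extension, it would look roughly like the following sketch:

```azurecli
# Assign a public endpoint to Spring Cloud Gateway (Enterprise tier); flags are assumptions.
az spring-cloud gateway update \
    --service <service-instance-name> \
    --resource-group <resource-group> \
    --assign-endpoint true
```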
Use the following steps to create an example application using Spring Cloud Gate
You can also view those properties in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Azure portal screenshot showing Azure Spring Cloud Spring Cloud Gateway page with Configuration pane showing.":::
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-configuration.png" alt-text="Screenshot of Azure portal showing Azure Spring Cloud Spring Cloud Gateway page with Configuration pane showing.":::
1. Configure routing rules to apps.
Use the following steps to create an example application using Spring Cloud Gate
You can also view the routes in the portal.
- :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Azure portal screenshot of Azure Spring Cloud Spring Cloud Gateway page showing 'Routing rules' pane.":::
+ :::image type="content" source="media/enterprise/how-to-use-enterprise-spring-cloud-gateway/gateway-route.png" alt-text="Screenshot of Azure portal Azure Spring Cloud Spring Cloud Gateway page showing 'Routing rules' pane.":::
1. Use the following command to access the `customers service` and `owners` APIs through the gateway endpoint:
spring-cloud How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-tls-certificate.md
You need to grant Azure Spring Cloud access to your key vault before you import
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Key vaults**, then select the Key Vault you'll import your certificate from.
-1. In the left navigation pane, select **Access policies**, then select **+ Add Access Policies**.
+1. In the left navigation pane, select **Access policies**, then select **Create**.
1. Select **Certificate permissions**, then select **Get** and **List**.
-1. Under **Select Principal**, select **None selected**, then select your security principal.
-1. Select **Select**, then select **Add**.
+ :::image type="content" source="media/use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/use-tls-certificates/grant-key-vault-permission.png":::
-Once you grant access to your key vault, you can import your certificate using these steps:
+1. Under **Principal**, select your **Azure Spring Cloud Resource Provider**.
+
+ :::image type="content" source="media/use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Cloud Resource Provider highlighted." lightbox="media/use-tls-certificates/select-service-principal.png":::
+
+1. Select **Review + Create**, then select **Create**.
+
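If you prefer to grant the access policy from the CLI instead of the portal, a hedged equivalent follows; the principal shown is an assumption, so substitute the object ID or SPN of the Azure Spring Cloud Resource Provider in your tenant.

```azurecli
# Allow the Azure Spring Cloud Resource Provider to read certificates from the key vault.
az keyvault set-policy \
    --name <key-vault-name> \
    --spn <azure-spring-cloud-resource-provider-spn> \
    --certificate-permissions get list
```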
+After you grant access to your key vault, you can import your certificate using these steps:
1. Go to your service instance.
1. From the left navigation pane of your instance, select **TLS/SSL settings**.
To load a certificate into your application in Azure Spring Cloud, start with th
1. From the left navigation pane of your app, select **Certificate management**.
1. Select **Add certificate** to choose certificates accessible for the app.

### Load a certificate from code
For a Java application, you can choose **Load into trust store** for the selecte
The following log from your app shows that the certificate is successfully loaded.
-```
+```output
Load certificate from specific path. alias = <certificate alias>, thumbprint = <certificate thumbprint>, file = <certificate name>
```

## Next steps
-* [Enable ingress-to-app Transport Layer Security](./how-to-enable-ingress-to-app-tls.md)
-* [Access Config Server and Service Registry](./how-to-access-data-plane-azure-ad-rbac.md)
+- [Enable ingress-to-app Transport Layer Security](./how-to-enable-ingress-to-app-tls.md)
+- [Access Config Server and Service Registry](./how-to-access-data-plane-azure-ad-rbac.md)
spring-cloud Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/monitor-app-lifecycle-events.md
Azure Spring Cloud provides built-in tools to monitor the status and health of y
For example, when you restart your app, you can find the affected instances from the **Activity log** detail page in the Azure portal.

## Monitor app lifecycle events in Azure Service Health
For example, when you restart your app, you can find the affected instances from
When your app is restarted because of unplanned events, your Azure Spring Cloud instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a potential loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage. You can find the latest status, the root cause, and affected instances in the health history page.
-
### Monitor planned app lifecycle events

Your app may be restarted during platform maintenance. You can receive a maintenance notification in advance from the **Planned maintenance** page of Azure Service Health. When platform maintenance happens, your Azure Spring Cloud instance will also show a status of **degraded**. If restarting is needed during platform maintenance, Azure Spring Cloud will perform a rolling update to incrementally update your applications. Rolling updates are designed to update your workloads without downtime. You can find the latest status in the health history page.

>[!NOTE]
> Currently, Azure Spring Cloud performs one regular planned maintenance to upgrade the underlying Kubernetes version every 2-4 months. For a detailed maintenance timeline, check the notifications on the Azure Service Health page.
The following steps show you how to create an activity log alert rule in the Azu
1. Navigate to **Activity log**, open the detail page for any activity log, then select **New alert rule**.
- :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert.png" alt-text="Screenshot of an activity log alert":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert.png" alt-text="Screenshot of Azure portal activity log alert.":::
2. Select the **Scope** for the alert.
3. Specify the alert **Condition**.
- :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" alt-text="Screenshot of an activity log alert condition":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" alt-text="Screenshot of Azure portal activity log alert condition.":::
4. Select **Actions** and add **Alert rule details**.
The following steps show you how to create an alert rule for service health noti
1. Navigate to **Resource health** under **Service Health**, then select **Add resource health alert**.
- :::image type="content" source="media/monitor-app-lifecycle-events/add-resource-health-alert.png" alt-text="Screenshot of the resource health pane with the 'Add resource health alert' button highlighted":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-resource-health-alert.png" alt-text="Screenshot of Azure portal resource health pane with the 'Add resource health alert' button highlighted.":::
2. Select the **Resource** for the alert.
- :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-target.png" alt-text="Screenshot of a resource health alert target":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-target.png" alt-text="Screenshot of Azure portal resource health alert target.":::
3. Specify the **Alert condition**.
- :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-condition.png" alt-text="Screenshot of a resource health alert condition":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-condition.png" alt-text="Screenshot of Azure portal resource health alert condition.":::
4. Select the **Actions** and add **Alert rule details**.
The following steps show you how to create an alert rule for planned maintenance
1. Navigate to **Health alerts** under **Service Health**, then select **Add service health alert**.
- :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert.png" alt-text="Screenshot of the health alerts pane with the 'Add service health alert' button highlighted":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert.png" alt-text="Screenshot of Azure portal health alerts pane with the 'Add service health alert' button highlighted.":::
2. Provide values for **Subscription**, **Service(s)**, **Region(s)**, **Event type**, **Actions**, and **Alert rule details**.
- :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" lightbox="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" alt-text="Screenshot of the 'Create rule alert' pane for Service Health":::
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" lightbox="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" alt-text="Screenshot of Azure portal 'Create rule alert' pane for Service Health.":::
3. Select **Create alert rule**.
spring-cloud Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps-enterprise.md
To bind apps to Application Configuration Service for VMware Tanzu®, follow the
1. Select **App binding**, then select **Bind app**.
1. Choose one app in the dropdown and select **Apply** to bind the application to Application Configuration Service for Tanzu.
- ![Azure portal screenshot of Azure Spring Cloud with Application Configuration Service page and 'App binding' section with 'Bind app' dialog showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind-dropdown.png)
+ ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and 'App binding' section with 'Bind app' dialog showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind-dropdown.png)
A list under **App name** shows the apps bound with Application Configuration Service for Tanzu, as shown in the following screenshot:
-![Azure portal screenshot of Azure Spring Cloud with Application Configuration Service page and 'App binding' section with app list showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind.png)
+![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and 'App binding' section with app list showing.](./media/enterprise/getting-started-enterprise/config-service-app-bind.png)
To bind apps to VMware Tanzu® Service Registry, follow these steps.
To bind apps to VMware Tanzu® Service Registry, follow these steps.
1. Select **App binding**, then select **Bind app**.
1. Choose one app in the dropdown, and then select **Apply** to bind the application to Tanzu Service Registry.
- :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Azure portal screenshot of Azure Spring Cloud with Service Registry page and 'Bind app' dialog showing.":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/service-reg-app-bind-dropdown.png" alt-text="Screenshot of Azure portal Azure Spring Cloud with Service Registry page and 'Bind app' dialog showing.":::
A list under **App name** shows the apps bound with Tanzu Service Registry, as shown in the following screenshot:

### [Azure CLI](#tab/azure-cli)
spring-cloud Quickstart Provision Service Instance Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-provision-service-instance-enterprise.md
Use the following steps to provision an Azure Spring Cloud service instance:
1. On the Azure Spring Cloud **Create** page, select **Change** next to the **Pricing** option, then select the **Enterprise** tier.
- :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with Basics section and 'Choose your pricing tier' pane showing." lightbox="media/enterprise/getting-started-enterprise/choose-enterprise-tier.png":::
Select the **Terms** checkbox to agree to the legal terms and privacy statements of the Enterprise tier offering in the Azure Marketplace.
Use the following steps to provision an Azure Spring Cloud service instance:
> [!NOTE]
> All Tanzu components are enabled by default. Be sure to carefully consider which Tanzu components you want to use or enable during the provisioning phase. After provisioning the Azure Spring Cloud instance, you can't enable or disable Tanzu components.
- :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with V M ware Tanzu Settings section showing." lightbox="media/enterprise/getting-started-enterprise/create-instance-tanzu-settings-public-preview.png":::
1. Select the **Application Insights** section, then select **Enable Application Insights**. You can also enable Application Insights after you provision the Azure Spring Cloud instance.
Use the following steps to provision an Azure Spring Cloud service instance:
> [!NOTE]
> You'll pay for the usage of Application Insights when integrated with Azure Spring Cloud. For more information about Application Insights pricing, see [Application Insights billing](../azure-monitor/logs/cost-logs.md#application-insights-billing).
- :::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Azure portal screenshot of Azure Spring Cloud creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
+ :::image type="content" source="media/enterprise/getting-started-enterprise/application-insights.png" alt-text="Screenshot of Azure portal Azure Spring Cloud creation page with Application Insights section showing." lightbox="media/enterprise/getting-started-enterprise/application-insights.png":::
1. Select **Review and create**. After validation completes successfully, select **Create** to start provisioning the service instance.
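Provisioning an Enterprise-tier instance can also be scripted. A minimal sketch with the Azure CLI follows; the names are placeholders, and the Enterprise SKU requires accepting the Marketplace terms beforehand:

```azurecli
# Create an Azure Spring Cloud service instance on the Enterprise tier.
az spring-cloud create \
    --name <service-instance-name> \
    --resource-group <resource-group> \
    --sku Enterprise
```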
spring-cloud Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-sample-app-introduction.md
The sample app is composed of two Spring apps:
The following diagram illustrates the sample app architecture:

> [!NOTE]
> When the application is hosted in Azure Spring Cloud Enterprise tier, the managed Application Configuration Service for VMware Tanzu® assumes the role of Spring Cloud Config Server and the managed VMware Tanzu® Service Registry assumes the role of Eureka Service Discovery without any code changes to the application. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md) and [Use Tanzu Service Registry](how-to-enterprise-service-registry.md).
spring-cloud Quickstart Setup Application Configuration Service Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-setup-application-configuration-service-enterprise.md
To use Application Configuration Service for Tanzu, follow these steps.
1. Select **Application Configuration Service**.
1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
- ![Azure portal screenshot of Azure Spring Cloud with Application Configuration Service page and Overview section showing.](./media/enterprise/getting-started-enterprise/config-service-overview.png)
+ ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and Overview section showing.](./media/enterprise/getting-started-enterprise/config-service-overview.png)
1. Select **Settings** and add a new entry in the **Repositories** section with the following information:
To use Application Configuration Service for Tanzu, follow these steps.
1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- ![Azure portal screenshot of Azure Spring Cloud with Application Configuration Service page and Settings section showing.](./media/enterprise/getting-started-enterprise/config-service-settings.png)
+ ![Screenshot of Azure portal Azure Spring Cloud with Application Configuration Service page and Settings section showing.](./media/enterprise/getting-started-enterprise/config-service-settings.png)
### [Azure CLI](#tab/azure-cli)
spring-cloud Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/tutorial-managed-identities-key-vault.md
export SERVICE_IDENTITY=$(az spring-cloud app show --name "springapp" -s "myspri
First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`. ```azurecli export SERVICE_IDENTITY={principal ID of user-assigned managed identity}
static-web-apps Front Door Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-door-manual.md
In this tutorial, you learn how to:
1. Select **Go to resource**. ## Disable cache for auth workflow
+> [!NOTE]
+>The cache expiration, cache key query string, and origin group override actions are deprecated. These deprecated actions still work, but your rule set can't be changed until you replace them with the new route configuration override action.
Add the following settings to disable Front Door's caching policies from trying to cache authentication and authorization-related pages.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status | |-|-|--||
-| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported - Flighting |
+| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported |
| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported | | V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported |
-| V13 Release - [KB4588753](https://support.microsoft.com/topic/632fb833-42ed-4e4d-8abd-746bd01c1064)| 13.0.0.0 | July 12, 2021 | Supported |
+| V13 Release - [KB4588753](https://support.microsoft.com/topic/632fb833-42ed-4e4d-8abd-746bd01c1064)| 13.0.0.0 | July 12, 2021 | Supported - Agent version expires on August 8, 2022 |
| V12.1 Release - [KB4588751](https://support.microsoft.com/topic/497dc33c-d38b-42ca-8015-01c906b96132)| 12.1.0.0 | May 20, 2021 | Supported - Agent version expires on May 23, 2022 | | V12 Release - [KB4568585](https://support.microsoft.com/topic/b9605f04-b4af-4ad8-86b0-2c490c535cfd)| 12.0.0.0 | March 26, 2021 | Supported - Agent version expires on May 23, 2022 |
storage Tiger Bridge Cdp Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/tiger-bridge-cdp-guide.md
+
+ Title: Deploy Tiger Bridge Continuous Data Protection, Archive and Disaster Recovery with Azure Blob Storage
+
+description: Deployment and configuration guide for running Tiger Bridge in Continuous Data Protection, Archive, and Disaster Recovery configurations.
++ Last updated : 04/05/2022+++++
+# Tiger Bridge archiving with continuous data protection and disaster recovery
+
+This article guides you through setting up the Tiger Bridge data management system with Azure Blob Storage. Tiger Bridge Continuous Data Protection (CDP) integrates with [Soft Delete](/azure/storage/blobs/soft-delete-blob-overview) and [Versioning](/azure/storage/blobs/versioning-overview) to achieve a complete Continuous Data Protection solution. It applies policies to move data between [Azure Blob tiers](/azure/storage/blobs/access-tiers-overview) for optimal cost. Continuous data protection gives customers a real-time, file-based backup with snapshots to achieve near-zero RPO. CDP enables customers to protect their assets with minimal resources. Optionally, it can be used in a WORM scenario using [immutable storage](/azure/storage/blobs/immutable-storage-overview).
+In addition, Tiger Bridge provides easy and efficient Disaster Recovery. It can be combined with [Microsoft DFSR](/windows-server/storage/dfs-replication/dfsr-overview), but that isn't mandatory. It allows mirrored DR sites, or it can be used with minimum-storage DR sites (keeping only the most recent data on-premises).
+All the replicated files in Azure Blob Storage are stored as native objects, allowing the organization to access them without using Tiger Bridge. This approach prevents vendor lock-in.
+
+## Reference architecture
++
+For more information on the Tiger Bridge solution and common use cases, see the [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide).
+
+## Before you begin
+
+- **Refer to the [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide)**. It describes the initial steps needed for setting up CDP.
+
+- **Choose the right storage options**. When you use Azure as a backup target, you'll make use of [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/). Blob storage is optimized for storing massive amounts of unstructured data, which is data that doesn't adhere to any data model or definition. It's durable, highly available, secure, and scalable. You can select the right storage for your workload by looking at two aspects (see the example after this list):
+ - [Storage redundancy](/azure/storage/common/storage-redundancy)
+ - [Storage tier](/azure/storage/blobs/access-tiers-overview)
+
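For example, a storage account intended as a Tiger Bridge backup target might combine locally redundant storage with the cool access tier. A minimal Azure CLI sketch, assuming placeholder names and that these redundancy and tier choices fit your workload:

```azurecli
# Sketch with placeholder names; choose --sku (redundancy) and --access-tier for your workload.
az storage account create \
    --resource-group myResourceGroup \
    --name mytigerbridgestorage \
    --location eastus \
    --kind StorageV2 \
    --sku Standard_LRS \
    --access-tier Cool
```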
+### Sample backup to Azure cost model
+A subscription-based model can be daunting to customers who are new to the cloud. While you pay only for the capacity used, you also pay for transactions (read and write) and for egress when data is read back to your on-premises environment (depending on the network connection used). We recommend using the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to perform what-if analysis. You can base the analysis on list pricing or on Azure Storage Reserved Capacity pricing, which can deliver up to 38% savings. Below is an example pricing exercise to model the monthly cost of backing up to Azure.
+
+| Cost factor | Monthly cost |
+| -- | |
+| 100 TiB of backup on cool storage | $1556.48 |
+| 2 TiB of new data written per day | $39 |
+| Monthly estimated total | $1595.48 |
+| One time restore of 5 TiB over public internet | $491.26 |
+
+> [!NOTE]
+> This is only an example. Your pricing may vary due to activities not captured here. The estimate was generated with the Azure Pricing Calculator using East US pay-as-you-go pricing. It is based on a 32-MB block size, which generates 65,536 PUT requests (write transactions) per day. This example may not reflect current Azure pricing or apply to your requirements.
+
+## Prepare Azure Blob Storage
+Refer to [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide)
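If you script the preparation, the container that Tiger Bridge replicates to can be created with the Azure CLI. A sketch with placeholder names; `--auth-mode login` assumes your Azure AD identity has data-plane permissions on the account:

```azurecli
# Sketch with placeholder names.
az storage container create \
    --account-name mytigerbridgestorage \
    --name tigerbridge-data \
    --auth-mode login
```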
+
+## Deploy Tiger Bridge
+Before you can install Tiger Bridge, you need to have a Windows file server installed and fully functional. The Windows server must have access to the storage account prepared in the [previous step](#prepare-azure-blob-storage).
+
+## Configure continuous data protection
+1. Deploy Tiger Bridge solution as described in [standalone hybrid configuration](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide#deploy-standalone-hybrid-configuration) (steps 1 to 4).
+1. Under Tiger Bridge settings, enable **Delete replica when source file is removed** and **Keep replica versions**
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-settings.png" alt-text="Screenshot that shows how to enable settings for CDP.":::
+1. Set the versioning policy to either **By Age** or **By Count**.
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-versioning-policy.png" alt-text="Screenshot that shows how to set versioning policy.":::
+
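Because CDP builds on blob soft delete and versioning (see the introduction above), verify that both features are enabled on the storage account. A hedged Azure CLI sketch with placeholder names; the 14-day retention is an example value, not a recommendation:

```azurecli
# Sketch with placeholder names; adjust the retention period to your requirements.
az storage account blob-service-properties update \
    --resource-group myResourceGroup \
    --account-name mytigerbridgestorage \
    --enable-versioning true \
    --enable-delete-retention true \
    --delete-retention-days 14
```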
+Tiger Bridge is natively integrated with the [Windows Volume Shadow Copy Service](/windows-server/storage/file-server/volume-shadow-copy-service). This integration enables restoring files and folders protected by Tiger Bridge CDP by using native Windows tools, like Windows Explorer. To verify that CDP is enabled, change any file and use Windows Explorer Previous Versions to verify that a version has been created. You can restore any version listed by selecting it and pressing **Restore**.
++
+Tiger Bridge CDP also enables restoring files after accidental deletions. To undelete a file, use the Tiger Bridge Shell extension: select the folder where the file was originally located, navigate to the Tiger Bridge Shell Extension, and select **Undelete**.
+
+
+## Configure archiving
+Tiger Bridge can move a replicated file between Azure Blob Storage tiers to optimize for cost. That process is called archiving, and it replaces a file with an offline file (stub). To configure archiving, perform the following steps.
+
+1. **Replicate data directly to the Azure Storage Archive tier** - If you want replicated data to be automatically tiered to the Azure Storage Archive tier, change the **Default access tier** in the Tiger Bridge configuration.
+
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-pair-account.png" alt-text="Screenshot that shows how to pair a storage account with local source.":::
+
+ Change **Default access tier** to **Archive**. You can also select a default **[Rehydration priority](/azure/storage/blobs/archive-rehydrate-to-online-tier)**.
+
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-change-access-tier.png" alt-text="Screenshot that shows how to change a default access tier in Tiger Bridge Configuration.":::
+
+1. **Configure archiving policy** - Tiger Bridge allows you to specify which files on the local source must be moved to the Archive tier. The policy has two parameters:
+    1. **Minimal file size** - specifies the minimum file size for archiving. Only files larger than the defined value will be tiered to the Archive tier.
+    1. **Time interval** - specifies the time interval in which the files haven't been accessed. All files that haven't been accessed within the defined interval will be moved to the Archive tier.
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-archive-policy.png" alt-text="Screenshot that shows how to change an archiving policy in Tiger Bridge Configuration.":::
+
+Once the files are in the Archive tier, they aren't directly accessible. To access those files, they have to be rehydrated (moved from the Archive tier to the Hot or Cool tier). The Tiger Bridge Shell Extension can be used to invoke the rehydration process in a simple way. Right-click the file you want to rehydrate in Windows Explorer, find the Tiger Bridge Shell Extension, and select **Rehydrate from Archive**. You'll be notified that **Restoring from archive (Rehydrate)** is an operation that may incur additional fees.
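The same tiering and rehydration operations can also be performed directly against a blob with the Azure CLI, which is useful for testing the behavior outside of Tiger Bridge. A sketch with placeholder account, container, and blob names:

```azurecli
# Sketch with placeholder names. Move a replicated blob to the Archive tier.
az storage blob set-tier \
    --account-name mytigerbridgestorage \
    --container-name tigerbridge-data \
    --name projects/report.docx \
    --tier Archive

# Rehydrate it back to an online tier; rehydration can take hours and may incur extra cost.
az storage blob set-tier \
    --account-name mytigerbridgestorage \
    --container-name tigerbridge-data \
    --name projects/report.docx \
    --tier Hot \
    --rehydrate-priority Standard
```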
+++
+For more detailed information on how to configure Tiger Bridge for your specific setup, refer to the latest [Tiger Bridge Administration Guide](https://www.tiger-technology.com/software/tiger-bridge/docs/).
+
+## Configure disaster recovery
+Tiger Bridge can be configured in Disaster Recovery mode. A typical configuration is active-passive, with one Tiger Bridge server on the primary site and one on the secondary site. The Tiger Bridge server on the primary site is active and replicates data to the secondary Tiger Bridge server (through Azure Blob Storage). The Tiger Bridge server on the secondary site is idle and receives file changes.
++
+1. Deploy and set up the Tiger Bridge server on the primary and secondary sites as instructed in the [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide#deploy-standalone-hybrid-configuration) for the standalone hybrid configuration.
+
+ > [!NOTE]
+ > Both Tiger Bridge servers on primary and secondary site must be connected to the same container and storage account.
+
+1. Enable Tiger Bridge synchronization on both Tiger Bridge servers.
+ :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-dr-sync-policy.png" alt-text="Screenshot that shows how to enable Tiger Bridge sync policy.":::
+
+ - On the secondary Tiger Bridge server, disable **Listen**, and enter the time interval at which the server will check for notifications about changes from the active server.
+ - Choose whether you want to automatically retrieve the changed files to the secondary server immediately. Selecting **Automatically restore files on the synchronized source** will download all changes to the secondary server immediately. If unselected, changes will be downloaded on demand.
+ - After the settings are done, select **Apply**.
+
+For increased resiliency on the primary site, Tiger Bridge supports Windows DFSR, which enables replication between two Tiger Bridge servers on the primary site. If one of the Tiger Bridge servers on the primary site has issues, the other one continues to operate.
+
+> [!TIP]
+> Tiger Bridge Policies and Synchronization can be defined as global (applied to all Tiger Bridge servers), or can be defined per Tiger Bridge server.
+
+## Support
+
+### How to open a case with Azure
+
+In the [Azure portal](https://portal.azure.com) search for support in the search bar at the top. Select **Help + support** -> **New Support Request**.
+
+### Engaging Tiger Bridge support
+
+Tiger Technology provides 365x24x7 support for Tiger Bridge. To contact support, [create a support ticket](https://www.tiger-technology.com/contact-support/).
+
+## Next steps
+- [Tiger Bridge website](https://www.tiger-technology.com/software/tiger-bridge/)
+- [Tiger Bridge guides](https://www.tiger-technology.com/software/tiger-bridge/docs/)
+- [Azure Storage partners for primary and secondary storage](./partner-overview.md)
+- [Tiger Bridge Marketplace offering](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)
+- [Running ISV file services in Azure](../primary-secondary-storage/isv-file-services.md)
stream-analytics Cosmos Db Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cosmos-db-managed-identity.md
+
+ Title: Use managed identities to access Cosmos DB from an Azure Stream Analytics job
+description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Cosmos DB output.
++++ Last updated : 05/04/2022+++
+# Use managed identities to access Cosmos DB from an Azure Stream Analytics job (preview)
+
+Azure Stream Analytics supports managed identity authentication for Azure Cosmos DB output. Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days. When you remove the need to manually authenticate, your Stream Analytics deployments can be fully automated. 
+
+A managed identity is a managed application registered in Azure Active Directory that represents a given Stream Analytics job. The managed application is used to authenticate to a targeted resource. For more information on managed identities for Azure Stream Analytics, see [Managed identities for Azure Stream Analytics](stream-analytics-managed-identities-overview.md).
+
+This article shows you how to enable system-assigned managed identity for a Cosmos DB output of a Stream Analytics job through the Azure portal. Before you can enable system-assigned managed identity, you must first have a Stream Analytics job and an Azure Cosmos DB resource.
+
+## Create a managed identity 
+
+First, you create a managed identity for your Azure Stream Analytics job.
+
+1. In the Azure portal, open your Azure Stream Analytics job. 
+
+2. From the left navigation menu, select **Managed Identity** located under *Configure*. Then, check the box next to **Use System-assigned Managed Identity** and select **Save**.
+
+ :::image type="content" source="media/event-hubs-managed-identity/system-assigned-managed-identity.png" alt-text="System assigned managed identity":::ΓÇ»
+
+3. A service principal for the Stream Analytics job's identity is created in Azure Active Directory. The life cycle of the newly created identity is managed by Azure. When the Stream Analytics job is deleted, the associated identity (that is, the service principal) is automatically deleted by Azure. 
+
+ When you save the configuration, the Object ID (OID) of the service principal is listed as the Principal ID as shown below:
+
+ :::image type="content" source="media/event-hubs-managed-identity/principal-id.png" alt-text="Principal ID":::
+
+ The service principal has the same name as the Stream Analytics job. For example, if the name of your job is `MyASAJob`, the name of the service principal is also `MyASAJob`.
+
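If you need the principal ID outside the portal (for example, to script the role assignment in the next section), it can be read from the job resource. A sketch with placeholder names; it assumes the standard ARM identity shape on the streaming job resource:

```azurecli
# Sketch with placeholder names; returns the Object ID (principal ID) of the system-assigned identity.
az resource show \
    --resource-group myResourceGroup \
    --name MyASAJob \
    --resource-type Microsoft.StreamAnalytics/streamingjobs \
    --query identity.principalId \
    --output tsv
```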
+## Grant the Stream Analytics job permissions to access the Azure Cosmos DB account
+
+For the Stream Analytics job to access your Cosmos DB using managed identity, the service principal you created must have special permissions to your Azure Cosmos DB account. In this step, you assign a role to your Stream Analytics job's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you can use the following two roles:
+
+|Built-in role |Description |
+|||
+|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts. Allows retrieval of read/write keys. |
+|[Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data. Allows retrieval of read keys. |
+
+> [!TIP]
+> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Account Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+
+1. Select **Access control (IAM)**.
+
+2. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+3. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | DocumentDB Account Contributor |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+> [!NOTE]
+> Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 8 minutes.
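As an alternative to the portal steps above, the same role assignment can be scripted with the Azure CLI. A sketch with placeholder names; the principal ID is the one shown on the job's Managed Identity page:

```azurecli
# Sketch with placeholder names. Grant the job's managed identity access to the Cosmos DB account.
COSMOSDB_ID=$(az cosmosdb show \
    --resource-group myResourceGroup \
    --name mycosmosaccount \
    --query id --output tsv)

az role assignment create \
    --assignee "<principal-id-of-your-job>" \
    --role "DocumentDB Account Contributor" \
    --scope "$COSMOSDB_ID"
```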
++
+### Add the Cosmos DB as an output
+
+Now that your managed identity is configured, you're ready to add the Cosmos DB resource as an output to your Stream Analytics job. 
+
+1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
+
+1. Select **Add > Cosmos DB**. In the output properties window, search for and select your Cosmos DB account, and then select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
+
+1. Fill out the rest of the properties and select **Save**.
+
+## Next steps
+
+* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md)
+* [Azure Cosmos DB Output](azure-cosmos-db-output.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Cosmos DB Optimization](stream-analytics-documentdb-output.md)
stream-analytics Service Bus Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/service-bus-managed-identity.md
+
+ Title: Use managed identities to access Service Bus from an Azure Stream Analytics job
+description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to an Azure Service Bus output.
++++ Last updated : 05/04/2022+++
+# Use managed identities to access Service Bus from an Azure Stream Analytics job (preview)
+
+Azure Stream Analytics supports managed identity authentication for Azure Service Bus outputs (queues and topics). Managed identities for Azure resources is a cross-Azure feature that enables you to create a secure identity associated with the deployment under which your application code runs. You can then associate that identity with access-control roles that grant custom permissions for accessing specific Azure resources that your application needs.
+
+With managed identities, the Azure platform manages this runtime identity. You do not need to store and protect access keys in your application code or configuration, either for the identity itself, or for the resources you need to access. For more information on managed identities for Azure Stream Analytics, see [Managed identities for Azure Stream Analytics](stream-analytics-managed-identities-overview.md).
+
+This article shows you how to enable system-assigned managed identity for a Service Bus output of a Stream Analytics job through the Azure portal. Before you can enable system-assigned managed identity, you must first have a Stream Analytics job and an Azure Service Bus resource.
+
+## Create a managed identity 
+
+First, you create a managed identity for your Azure Stream Analytics job.
+
+1. In the Azure portal, open your Azure Stream Analytics job. 
+
+2. From the left navigation menu, select **Managed Identity** located under *Configure*. Then, check the box next to **Use System-assigned Managed Identity** and select **Save**.
+
+ :::image type="content" source="media/event-hubs-managed-identity/system-assigned-managed-identity.png" alt-text="System assigned managed identity":::ΓÇ»
+
+3. A service principal for the Stream Analytics job's identity is created in Azure Active Directory. The life cycle of the newly created identity is managed by Azure. When the Stream Analytics job is deleted, the associated identity (that is, the service principal) is automatically deleted by Azure. 
+
+ When you save the configuration, the Object ID (OID) of the service principal is listed as the Principal ID as shown below:
+
+ :::image type="content" source="media/event-hubs-managed-identity/principal-id.png" alt-text="Principal ID":::
+
+ The service principal has the same name as the Stream Analytics job. For example, if the name of your job is `MyASAJob`, the name of the service principal is also `MyASAJob`.
+
+## Grant the Stream Analytics job permissions to access Azure Service Bus
+
+For the Stream Analytics job to access your Service Bus using managed identity, the service principal you created must have special permissions to your Azure Service Bus resource. In this step, you assign a role to your Stream Analytics job's system-assigned managed identity. Azure provides the following built-in roles for authorizing access to a Service Bus namespace:
+
+- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Enables data access to Service Bus namespace and its entities (queues, topics, subscriptions, and filters)
+- [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to give send access to Service Bus namespace and its entities.
+- [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver): Use this role to give receiving access to Service Bus namespace and its entities.
+
+Note that Stream Analytics jobs don't need, and don't use, the [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver) role.
+
+> [!TIP]
+> When you assign roles, assign only the needed access. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+
+1. Select **Access control (IAM)**.
+
+2. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+3. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Azure Service Bus Data Owner or Azure Service Bus Data Sender |
+ | Assign access to | User, group, or service principal |
+ | Members | \<Name of your Stream Analytics job> |
+
+ ![Screenshot that shows Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+> [!NOTE]
+> Due to global replication or caching latency, there may be a delay when permissions are revoked or granted. Changes should be reflected within 8 minutes.
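The equivalent role assignment can also be made with the Azure CLI, scoped to the Service Bus namespace. A sketch with placeholder names; use the Data Sender role instead if the job only needs to send:

```azurecli
# Sketch with placeholder names. Grant the job's managed identity access to the namespace.
SERVICEBUS_ID=$(az servicebus namespace show \
    --resource-group myResourceGroup \
    --name mysbnamespace \
    --query id --output tsv)

az role assignment create \
    --assignee "<principal-id-of-your-job>" \
    --role "Azure Service Bus Data Owner" \
    --scope "$SERVICEBUS_ID"
```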
++
+### Add the Service Bus as an output
+
+Now that your managed identity is configured, you're ready to add the Service Bus resource as an output to your Stream Analytics job. 
+
+1. Go to your Stream Analytics job and navigate to the **Outputs** page under **Job Topology**.
+
+1. Select **Add > Service Bus queue or Service Bus topic**. In the output properties window, search for and select your Service Bus namespace, and then select **Managed Identity: System assigned** from the *Authentication mode* drop-down menu.
+
+1. Fill out the rest of the properties and select **Save**.
+
+## Next steps
+
+* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
This article contains information about how to troubleshoot most frequent proble
## Synapse Studio
-Synapse studio is easy to use tool that enables you to access your data using a browser without a need to install database access tools. However, Synapse studio is not designed to read a large set of data or full management of SQL objects.
+Synapse Studio is an easy-to-use tool that enables you to access your data using a browser without needing to install database access tools. However, Synapse Studio isn't designed for reading a large set of data or for full management of SQL objects.
### Serverless SQL pool is grayed out in Synapse Studio
If the issue still continues, create a [support ticket](../../azure-portal/suppo
### Serverless databases are not shown in Synapse studio
-If you do not see the databases that are created in serverless SQL pool, check is your serverless SQL pool started. If the serverless SQL pool is deactivated, the databases will not be shown. Execute any query (for example `SELECT 1`) on the serverless pool to activate it, and the databases will be shown.
+If you don't see the databases that are created in serverless SQL pool, check whether your serverless SQL pool is started. If the serverless SQL pool is deactivated, the databases won't be shown. Execute any query (for example `SELECT 1`) on the serverless pool to activate it, and the databases will be shown.
### Synapse Serverless SQL pool is showing as unavailable Wrong network configuration is often the cause for this behavior. Make sure the ports are appropriately configured. In case you use firewall or Private Endpoint check their settings as well. Finally, make sure the appropriate roles are granted. ## Storage access
-If you are getting the errors while trying to access the files on storage, make sure that you have permissions to access data. You should be able to access publicly available files. If you are accessing data without credentials, make sure that your Azure AD identity can directly access the files.
+If you're getting errors while trying to access files on storage, make sure that you have permissions to access the data. You should be able to access publicly available files. If you're accessing data without credentials, make sure that your Azure AD identity can directly access the files.
If you have SAS key that you should use to access files, make sure that you created a credential ([server-level](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential) or [database-scoped](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential)) that contains that credential. The credentials are required if you need to access data using the workspace [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) and custom [service principal name](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential). ### Cannot read, list or access files on data lake storage
-If you are using Azure AD login without explicit credential, make sure that your Azure AD identity can access the files on storage. Your Azure AD identity need to have Blob Data Reader or list/read ACL permissions to access the files - see [Query fails because file cannot be opened](#query-fails-because-file-cannot-be-opened).
+If you're using Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files on storage. Your Azure AD identity needs to have Blob Data Reader or list/read ACL permissions to access the files - see [Query fails because file cannot be opened](#query-fails-because-file-cannot-be-opened).
-If you are accessing storage using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has Data Reader/Contributor role, or ACL permissions. If you have used [SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) make sure that it has `rl` permission and that it hasn't expired.
+If you're accessing storage using [credentials](develop-storage-files-storage-access-control.md#credentials), make sure that your [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) or [SPN](develop-storage-files-storage-access-control.md?tabs=service-principal) has Data Reader/Contributor role, or ACL permissions. If you have used [SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) make sure that it has `rl` permission and that it hasn't expired.
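For the role-based option, a minimal Azure CLI sketch (placeholder names) that grants the Storage Blob Data Reader role on the storage account; the assignee can be an Azure AD user, a service principal, or the workspace managed identity:

```azurecli
# Sketch with placeholder names.
STORAGE_ID=$(az storage account show \
    --resource-group myResourceGroup \
    --name mydatalake \
    --query id --output tsv)

az role assignment create \
    --assignee "user@contoso.com" \
    --role "Storage Blob Data Reader" \
    --scope "$STORAGE_ID"
```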
If you are using SQL login and the `OPENROWSET` function [without data source](develop-storage-files-overview.md#query-files-using-openrowset), make sure that you have a server-level credential that matches the storage URI and has permission to access the storage. ### Query fails because file cannot be opened
The following error is returned when a serverless SQL pool cannot read the Delta
Content of directory on path 'https://.....core.windows.net/.../_delta_log/*.json' cannot be listed. ```
-Make sure that `_delta_log` folder exists (maybe you are querying plain Parquet files that are not converted to Delta Lake format). If the `_delta_log` folder exists, make sure that you have both read and list permission on the underlying Delta Lake folders. Try to read \*.json files directly using FORMAT='CSV' (put your URI in the BULK parameter):
+Make sure that `_delta_log` folder exists (maybe you're querying plain Parquet files that aren't converted to Delta Lake format). If the `_delta_log` folder exists, make sure that you have both read and list permission on the underlying Delta Lake folders. Try to read \*.json files directly using FORMAT='CSV' (put your URI in the BULK parameter):
```sql select top 10 *
from openrowset(BULK 'https://.....core.windows.net/.../_delta_log/*.json',FORMA
with (line varchar(max)) as logs ```
-If this query fails, the caller does not have permission to read the underlying storage files.  
+If this query fails, the caller doesn't have permission to read the underlying storage files.  
## Query execution
-You might get errors during the query execution if the caller [cannot access some objects](develop-storage-files-overview.md#permissions), query [cannot access external data](develop-storage-files-storage-access-control.md#storage-permissions), or you are using some functionalities that are [not supported in serverless SQL pools](overview-features.md).
+You might get the errors during the query execution in the following cases:
+- The caller [cannot access some objects](develop-storage-files-overview.md#permissions),
+- The query [cannot access external data](develop-storage-files-storage-access-control.md#storage-permissions),
+- The query contains some functionalities that are [not supported in serverless SQL pools](overview-features.md).
### Query fails because it cannot be executed due to current resource constraints
The error *Invalid object name 'table name'* indicates that you are using an obj
### Unclosed quotation mark after the character string
-In some rare cases, where you are using `LIKE` operator on a string column or some comparison with the string literals, you might get the following error:
+In some rare cases, where you're using `LIKE` operator on a string column or some comparison with the string literals, you might get the following error:
``` Msg 105, Level 15, State 1, Line 88 Unclosed quotation mark after the character string ```
-This error might happen if you are using `Latin1_General_100_BIN2_UTF8` collation on the column. Try to set `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead of the `Latin1_General_100_BIN2_UTF8` collation to resolve the issue. If the error is still returned, raise a support request through the Azure portal.
+This error might happen if you're using `Latin1_General_100_BIN2_UTF8` collation on the column. Try to set `Latin1_General_100_CI_AS_SC_UTF8` collation on the column instead of the `Latin1_General_100_BIN2_UTF8` collation to resolve the issue. If the error is still returned, raise a support request through the Azure portal.
### Could not allocate tempdb space while transferring data from one distribution to another
If you would like to query the file ΓÇÿnames.csvΓÇÖ with this query 1, Azure Syn
names.csv ```csv Id,first name,
-1,Adam
+1, Adam
2,Bob 3,Charles 4,David
spark.conf.set("spark.sql.legacy.parquet.int96RebaseModeInWrite", "CORRECTED")
This error might indicate that some internal process issue happened in the serverless SQL pool. File a support ticket with all necessary details that could help Azure support team to investigate the issue.
-Please specify in the support requests anything that might be unusual compared to the regular workload, such as large number of concurrent requests or some special workload or query that started executing before this error happened.
+Describe in the support requests anything that might be unusual compared to the regular workload, such as large number of concurrent requests or some special workload or query that started executing before this error happened.
## Configuration
If your query fails with the error message '*Please create a master key in the d
Most likely, you just created a new user database and did not create a master key yet.
-To resolve this, create a master key with the following query:
+To resolve this problem, create a master key with the following query:
```sql CREATE MASTER KEY [ ENCRYPTION BY PASSWORD ='password' ];
There are some limitations and known issues that you might see in Delta Lake sup
- Root folder must have a sub-folder named `_delta_log`. The query will fail if there is no `_delta_log` folder. If you don't see that folder, then you are referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) using Apache Spark pools. - Do not specify wildcards to describe the partition schema. Delta Lake query will automatically identify the Delta Lake partitions. - Delta Lake tables created in the Apache Spark pools are not automatically available in serverless SQL pool. To query such Delta Lake tables using T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as format.-- External tables do not support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on Delta Lake folder to leverage the partition elimination. See known issues and workarounds later in the article.
+- External tables do not support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on the Delta Lake folder to use partition elimination. See known issues and workarounds later in the article.
- Serverless SQL pools do not support time travel queries. You can vote for this feature on [Azure feedback site](https://feedback.azure.com/d365community/ide?pivots=programming-language-python#read-older-versions-of-data-using-time-travel). - Serverless SQL pools do not support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Azure Synapse Analytics [to update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).-- Serverless SQL pools in Azure Synapse Analytics do not support datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters).
+ - You cannot [store query results to storage in Delta Lake format](create-external-table-as-select.md) using the Create external table as select (CETAS) command. The CETAS command supports only Parquet and CSV as output formats.
+- Serverless SQL pools in Azure Synapse Analytics do not support datasets with the [BLOOM filter](/azure/databricks/delta/optimizations/bloom-filters). The serverless SQL pool will ignore BLOOM filters.
- Delta Lake support is not available in dedicated SQL pools. Make sure that you are using serverless pools to query Delta Lake files. ### JSON text is not properly formatted
Make sure that your Delta Lake data set is not corrupted. Verify that you can re
**Workaround** - try to create a checkpoint on Delta Lake data set using Apache Spark pool and re-run the query. The checkpoint will aggregate transactional json log files and might solve the issue.
-If the data set is valid, [create a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and provide an additional info:
-- Do not make any changes like adding/removing the columns or optimizing the table because this might change the state of Delta Lake transaction log files.
+If the data set is valid, [create a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md#create-a-support-request) and provide more info:
+- Do not make any changes like adding/removing the columns or optimizing the table because this operation might change the state of Delta Lake transaction log files.
- Copy the content of `_delta_log` folder into a new empty folder. **DO NOT** copy `.parquet data` files. - Try to read the content that you copied in new folder and verify that you are getting the same error. - Send the content of the copied `_delta_log` file to Azure support.
-Now you can continue using Delta Lake folder with Spark pool. You will provide copied data to Microsoft support if you are allowed to share this. Azure team will investigate the content of the `delta_log` file and provide more info about the possible errors and the workarounds.
+Now you can continue using the Delta Lake folder with the Spark pool. You will provide the copied data to Microsoft support if you're allowed to share this information. The Azure team will investigate the content of the `delta_log` file and provide more info about the possible errors and the workarounds.
## Performance
The serverless SQL pool assigns the resources to the queries based on the size o
### Query duration is very long
-If you have queries with the query duration longer than 30min, this indicates that returning results to the client is slow. Serverless SQL pool has 30min limit for execution, and any additional time is spent on result streaming. Try with
-- If you are using [Synapse studio](#query-is-slow-when-executed-using-synapse-studio) try to reproduce the issues with some other application like SQL Server Management Studio or Azure Data Studio.
+If you have queries with a duration longer than 30 minutes, the query is slowly returning results to the client. Serverless SQL pool has a 30-minute limit for execution, and any additional time is spent on result streaming. Try the following workarounds:
+- If you are using [Synapse studio](#query-is-slow-when-executed-using-synapse-studio), try to reproduce the issues with some other application like SQL Server Management Studio or Azure Data Studio.
- If your query is slow when executed using [SSMS, ADS, Power BI, or some other application](#query-is-slow-when-executed-using-application) check networking issues and best practices.
+- Put the query in the CETAS command and measure the query duration. The CETAS command will store the results to Azure Data Lake Storage and will not depend on the client connection. If the CETAS command finishes faster than the original query, check the network bandwidth between the client and the serverless SQL pool.
#### Query is slow when executed using Synapse studio
See the best practices for [collocating the resources](best-practices-serverless
### High variations in query durations If you are executing the same query and observing variations in the query durations, there might be several reasons that can cause this behavior: -- Check is this a first execution of a query. The first execution of a query collects the statistics required to create a plan. The statistics are collected by scanning the underlying files and might increase the query duration. In Synapse studio you will see additional ΓÇ£global statistics creationΓÇ¥ queries in the SQL request list, that are executed before your query.
+- Check whether this is the first execution of the query. The first execution of a query collects the statistics required to create a plan. The statistics are collected by scanning the underlying files and might increase the query duration. In Synapse Studio, you will see the "global statistics creation" queries in the SQL request list; they are executed before your query.
- Statistics might expire after some time, so periodically you might observe an impact on performance because the serverless pool must scan and re-built the statistics. You might notice additional ΓÇ£global statistics creationΓÇ¥ queries in the SQL request list, that are executed before your query.-- Check is there some additional workload that is running on the same endpoint when you executed the query with the longer duration. The serverless SQL endpoint will equally allocate the resources to all queries that are executed in parallel, and the query might be delayed.
+- Check whether some other workload is running on the same endpoint when you executed the query with the longer duration. The serverless SQL endpoint will equally allocate the resources to all queries that are executed in parallel, and the query might be delayed.
## Connections
Serverless SQL pool enables you to connect using TDS protocol and use T-SQL lang
### SQL pool is warming up Following a longer period of inactivity Serverless SQL pool will be deactivated. The activation will happen automatically on the first next activity, such as the first connection attempt. Activation process might take a bit longer than a single connection attempt interval, thus the error message is displayed. Re-trying the connection attempt should be enough.
-As a best practice, for the clients which support it, use ConnectionRetryCount and ConnectRetryInterval connection string keywords to control the reconnect behavior.
+As a best practice, for the clients that support it, use the ConnectRetryCount and ConnectRetryInterval connection string keywords to control the reconnect behavior.
If the error message persists, file a support ticket through the Azure portal.
If you want to create role assignment for Service Principal Identifier/Azure AD
``` Login error: Login failed for user '<token-identified principal>'. ```
-For service principals login should be created with Application ID as SID (not with Object ID). There is a known limitation for service principals which is preventing the Azure Synapse service from fetching Application ID from Microsoft Graph when creating role assignment for another SPI/app.
+For service principals, the login should be created with the Application ID as the SID (not the Object ID). There is a known limitation for service principals that prevents the Azure Synapse service from fetching the Application ID from Microsoft Graph when creating a role assignment for another SPI/app.
**Solution #1**
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
The Azure virtual machines you create for Azure Virtual Desktop must have access
|*xt.table.core.windows.net|443|Agent traffic (optional)|AzureCloud| |*xt.queue.core.windows.net|443|Agent traffic (optional)|AzureCloud|
+A [Service Tag](https://docs.microsoft.com/azure/virtual-network/service-tags-overview) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service Tags can be used in both Network Security Group ([NSG](https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview)) and [Azure Firewall](https://docs.microsoft.com/azure/firewall/service-tags) rules to restrict outbound network access. Service Tags can also be used in a User Defined Route ([UDR](https://docs.microsoft.com/azure/virtual-network/virtual-networks-udr-overview#user-defined)) to customize traffic routing behavior.
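For example, an outbound network security group rule can use a service tag as its destination. A sketch with placeholder names; the rule name, priority, and the WindowsVirtualDesktop tag are illustrative choices, not a complete rule set for Azure Virtual Desktop:

```azurecli
# Sketch with placeholder names. Allow outbound TCP 443 to the WindowsVirtualDesktop service tag.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNsg \
    --name Allow-AVD-Service-Traffic \
    --priority 100 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes WindowsVirtualDesktop \
    --destination-port-ranges 443
```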
+ >[!IMPORTANT] >Azure Virtual Desktop supports the FQDN tag. For more information, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md). >
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/security-guide.md
By restricting operating system capabilities, you can strengthen the security of
Trusted launch VMs are Gen2 Azure VMs with enhanced security features aimed at protecting against "bottom of the stack" threats through attack vectors such as rootkits, boot kits, and kernel-level malware. The following are the enhanced security features of trusted launch, all of which are supported in Azure Virtual Desktop. To learn more about trusted launch, visit [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+>[!NOTE]
+>Bring your own custom image for Trusted Launch VMs isn't yet supported. When deploying an Azure Virtual Desktop host pool, you're limited to the list of pre-built OS images available in the **Image** drop-down list.
+ ### Secure Boot Secure Boot is a mode that platform firmware supports that protects your firmware from malware-based rootkits and boot kits. This mode only allows signed OSes and drivers to start up the machine.
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md
- Title: Instance Protection for Azure virtual machine scale set instances
-description: Learn how to protect Azure virtual machine scale set instances from scale-in and scale-set operations.
----- Previously updated : 02/26/2020----
-# Instance Protection for Azure virtual machine scale set instances
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-Azure virtual machine scale sets enable better elasticity for your workloads through [Autoscale](virtual-machine-scale-sets-autoscale-overview.md), so you can configure when your infrastructure scales-out and when it scales-in. Scale sets also enable you to centrally manage, configure, and update a large number of VMs through different [upgrade policy](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) settings. You can configure an update on the scale set model and the new configuration is applied automatically to every scale set instance if you've set the upgrade policy to Automatic or Rolling.
-
-As your application processes traffic, there can be situations where you want specific instances to be treated differently from the rest of the scale set instance. For example, certain instances in the scale set could be performing long-running operations, and you don't want these instances to be scaled-in until the operations complete. You might also have specialized a few instances in the scale set to perform additional or different tasks than the other members of the scale set. You require these 'special' VMs not to be modified with the other instances in the scale set. Instance protection provides the additional controls to enable these and other scenarios for your application.
-
-This article describes how you can apply and use the different instance protection capabilities with scale set instances.
-
-## Types of instance protection
-Scale sets provide two types of instance protection capabilities:
--- **Protect from scale-in**
- - Enabled through **protectFromScaleIn** property on the scale set instance
- - Protects instance from Autoscale initiated scale-in
- - User-initiated instance operations (including instance delete) are **not blocked**
- - Operations initiated on the scale set (upgrade, reimage, deallocate, etc.) are **not blocked**
--- **Protect from scale set actions**
- - Enabled through **protectFromScaleSetActions** property on the scale set instance
- - Protects instance from Autoscale initiated scale-in
- - Protects instance from operations initiated on the scale set (such as upgrade, reimage, deallocate, etc.)
- - User-initiated instance operations (including instance delete) are **not blocked**
- - Delete of the full scale set is **not blocked**
-
-## Protect from scale-in
-Instance protection can be applied to scale set instances after the instances are created. Protection is applied and modified only on the [instance model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-vm-model-view) and not on the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model).
-
-There are multiple ways of applying scale-in protection on your scale set instances as detailed in the examples below.
-
-### Azure portal
-
-You can apply scale-in protection through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
-
-1. Go to an existing virtual machine scale set.
-1. Select **Instances** from the menu on the left, under **Settings**.
-1. Select the name of the instance you want to protect.
-1. Select the **Protection Policy** tab.
-1. In the **Protection Policy** blade, select the **Protect from scale-in** option.
-1. Select **Save**.
-
-### REST API
-
-The following example applies scale-in protection to an instance in the scale set.
-
-```
-PUT on `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualMachines/{instance-id}?api-version=2019-03-01`
-```
-
-```json
-{
- "properties": {
- "protectionPolicy": {
- "protectFromScaleIn": true
- }
- }
-}
-
-```
-
-> [!NOTE]
->Instance protection is only supported with API version 2019-03-01 and above
-
-### Azure PowerShell
-
-Use the [Update-AzVmssVM](/powershell/module/az.compute/update-azvmssvm) cmdlet to apply scale-in protection to your scale set instance.
-
-The following example applies scale-in protection to an instance in the scale set having instance ID 0.
-
-```azurepowershell-interactive
-Update-AzVmssVM `
- -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myVMScaleSet" `
- -InstanceId 0 `
- -ProtectFromScaleIn $true
-```
-
-### Azure CLI 2.0
-
-Use [az vmss update](/cli/azure/vmss#az-vmss-update) to apply scale-in protection to your scale set instance.
-
-The following example applies scale-in protection to an instance in the scale set having instance ID 0.
-
-```azurecli-interactive
-az vmss update \
- --resource-group <myResourceGroup> \
- --name <myVMScaleSet> \
- --instance-id 0 \
- --protect-from-scale-in true
-```
-
-## Protect from scale set actions
-Instance protection can be applied to scale set instances after the instances are created. Protection is applied and modified only on the [instance model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-vm-model-view) and not on the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model).
-
-Protecting an instance from scale set actions also protects the instance from Autoscale initiated scale-in.
-
-There are multiple ways of applying scale set actions protection on your scale set instances as detailed in the examples below.
-
-### Azure portal
-
-You can apply protection from scale set actions through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
-
-1. Go to an existing virtual machine scale set.
-1. Select **Instances** from the menu on the left, under **Settings**.
-1. Select the name of the instance you want to protect.
-1. Select the **Protection Policy** tab.
-1. In the **Protection Policy** blade, select the **Protect from scale set actions** option.
-1. Select **Save**.
-
-### REST API
-
-The following example applies protection from scale set actions to an instance in the scale set.
-
-```
-PUT on `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vMScaleSetName}/virtualMachines/{instance-id}?api-version=2019-03-01`
-```
-
-```json
-{
- "properties": {
- "protectionPolicy": {
- "protectFromScaleIn": true,
- "protectFromScaleSetActions": true
- }
- }
-}
-
-```
-
-> [!NOTE]
->Instance protection is only supported with API version 2019-03-01 and above.</br>
-Protecting an instance from scale set actions also protects the instance from Autoscale initiated scale-in. You can't specify "protectFromScaleIn": false when setting "protectFromScaleSetActions": true
-
-### Azure PowerShell
-
-Use the [Update-AzVmssVM](/powershell/module/az.compute/update-azvmssvm) cmdlet to apply protection from scale set actions to your scale set instance.
-
-The following example applies protection from scale set actions to an instance in the scale set having instance ID 0.
-
-```azurepowershell-interactive
-Update-AzVmssVM `
- -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myVMScaleSet" `
- -InstanceId 0 `
- -ProtectFromScaleIn $true `
- -ProtectFromScaleSetAction $true
-```
-
-### Azure CLI 2.0
-
-Use [az vmss update](/cli/azure/vmss#az-vmss-update) to apply protection from scale set actions to your scale set instance.
-
-The following example applies protection from scale set actions to an instance in the scale set having instance ID 0.
-
-```azurecli-interactive
-az vmss update \
- --resource-group <myResourceGroup> \
- --name <myVMScaleSet> \
- --instance-id 0 \
- --protect-from-scale-in true \
- --protect-from-scale-set-actions true
-```
-
-## Troubleshoot
-### No protectionPolicy on scale set model
-Instance protection is only applicable on scale set instances and not on the scale set model.
-
-### No protectionPolicy on scale set instance model
-By default, protection policy is not applied to an instance when it is created.
-
-You can apply instance protection to scale set instances after the instances are created.
-
-### Not able to apply instance protection
-Instance protection is only supported with API version 2019-03-01 and above. Check the API version being used and update as required. You might also need to update your PowerShell or CLI to the latest version.
-
-## Next steps
-Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
+
+ Title: Instance Protection for Azure virtual machine scale set instances
+description: Learn how to protect Azure virtual machine scale set instances from scale-in and scale-set operations.
+ Last updated : 02/26/2020
+# Instance Protection for Azure virtual machine scale set instances
+**Applies to:** :heavy_check_mark: Uniform scale sets
+
+Azure virtual machine scale sets enable better elasticity for your workloads through [Autoscale](virtual-machine-scale-sets-autoscale-overview.md), so you can configure when your infrastructure scales out and when it scales in. Scale sets also enable you to centrally manage, configure, and update a large number of VMs through different [upgrade policy](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) settings. You can configure an update on the scale set model, and the new configuration is applied automatically to every scale set instance if you've set the upgrade policy to Automatic or Rolling.
+
+As your application processes traffic, there can be situations where you want specific instances to be treated differently from the rest of the scale set instances. For example, certain instances in the scale set could be performing long-running operations, and you don't want these instances to be scaled in until the operations complete. You might also have specialized a few instances in the scale set to perform additional or different tasks than the other members of the scale set, and you need these 'special' VMs not to be modified along with the other instances. Instance protection provides the additional controls to enable these and other scenarios for your application.
+
+This article describes how you can apply and use the different instance protection capabilities with scale set instances.
+
+## Types of instance protection
+Scale sets provide two types of instance protection capabilities:
+
+- **Protect from scale-in**
+  - Enabled through the **protectFromScaleIn** property on the scale set instance
+  - Protects the instance from Autoscale initiated scale-in
+  - User-initiated instance operations (including instance delete) are **not blocked**
+  - Operations initiated on the scale set (upgrade, reimage, deallocate, and so on) are **not blocked**
+
+- **Protect from scale set actions**
+  - Enabled through the **protectFromScaleSetActions** property on the scale set instance
+  - Protects the instance from Autoscale initiated scale-in
+  - Protects the instance from operations initiated on the scale set (such as upgrade, reimage, and deallocate)
+  - User-initiated instance operations (including instance delete) are **not blocked**
+  - Deletion of the full scale set is **not blocked**
+
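One way to see which instances already have either property set is to query the instance models with the Azure CLI. This is a minimal sketch, assuming the same placeholder resource group and scale set names used in the later examples; empty values in the output mean protection has never been applied to that instance.

```azurecli-interactive
# List each instance together with its current protection settings (blank means never set).
az vmss list-instances \
  --resource-group myResourceGroup \
  --name myVMScaleSet \
  --query "[].{instanceId:instanceId, protectFromScaleIn:protectionPolicy.protectFromScaleIn, protectFromScaleSetActions:protectionPolicy.protectFromScaleSetActions}" \
  --output table
```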
+## Protect from scale-in
+Instance protection can be applied to scale set instances after the instances are created. Protection is applied and modified only on the [instance model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-vm-model-view) and not on the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model).
+
+There are multiple ways of applying scale-in protection on your scale set instances as detailed in the examples below.
+
+### Azure portal
+
+You can apply scale-in protection through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
+
+1. Go to an existing virtual machine scale set.
+1. Select **Instances** from the menu on the left, under **Settings**.
+1. Select the name of the instance you want to protect.
+1. Select the **Protection Policy** tab.
+1. In the **Protection Policy** blade, select the **Protect from scale-in** option.
+1. Select **Save**.
+
+### REST API
+
+The following example applies scale-in protection to an instance in the scale set.
+
+```
+PUT on `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualMachines/{instance-id}?api-version=2019-03-01`
+```
+
+```json
+{
+ "properties": {
+ "protectionPolicy": {
+ "protectFromScaleIn": true
+ }
+ }
+}
+
+```
+
+> [!NOTE]
+>Instance protection is only supported with API version 2019-03-01 and above.
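
To try this REST call without writing a client, one option is `az rest`, which signs the request with your signed-in Azure CLI credentials. This is a minimal sketch, with placeholder subscription, resource group, scale set, and instance values; the body matches the request shown above.

```azurecli-interactive
# PUT the protection policy onto instance 0; az rest adds authentication from the current CLI login.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myVMScaleSet/virtualMachines/0?api-version=2019-03-01" \
  --body '{"properties": {"protectionPolicy": {"protectFromScaleIn": true}}}'
```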
+
+### Azure PowerShell
+
+Use the [Update-AzVmssVM](/powershell/module/az.compute/update-azvmssvm) cmdlet to apply scale-in protection to your scale set instance.
+
+The following example applies scale-in protection to an instance in the scale set having instance ID 0.
+
+```azurepowershell-interactive
+Update-AzVmssVM `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myVMScaleSet" `
+ -InstanceId 0 `
+ -ProtectFromScaleIn $true
+```
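
To confirm the change, you can read the instance model back with `Get-AzVmssVM`. This is a minimal sketch, assuming your Az.Compute version surfaces the protection policy on the returned instance object.

```azurepowershell-interactive
# Read the instance model for instance 0 and inspect its protection policy.
$instance = Get-AzVmssVM `
  -ResourceGroupName "myResourceGroup" `
  -VMScaleSetName "myVMScaleSet" `
  -InstanceId 0
$instance.ProtectionPolicy
```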
+
+### Azure CLI 2.0
+
+Use [az vmss update](/cli/azure/vmss#az-vmss-update) to apply scale-in protection to your scale set instance.
+
+The following example applies scale-in protection to an instance in the scale set having instance ID 0.
+
+```azurecli-interactive
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
+ --instance-id 0 \
+ --protect-from-scale-in true
+```
+
+## Protect from scale set actions
+Instance protection can be applied to scale set instances after the instances are created. Protection is applied and modified only on the [instance model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-vm-model-view) and not on the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model).
+
+Protecting an instance from scale set actions also protects the instance from Autoscale initiated scale-in.
+
+There are multiple ways of applying scale set actions protection on your scale set instances as detailed in the examples below.
+
+### Azure portal
+
+You can apply protection from scale set actions through the Azure portal to an instance in the scale set. You cannot adjust more than one instance at a time. Repeat the steps for each instance you want to protect.
+
+1. Go to an existing virtual machine scale set.
+1. Select **Instances** from the menu on the left, under **Settings**.
+1. Select the name of the instance you want to protect.
+1. Select the **Protection Policy** tab.
+1. In the **Protection Policy** blade, select the **Protect from scale set actions** option.
+1. Select **Save**.
+
+### REST API
+
+The following example applies protection from scale set actions to an instance in the scale set.
+
+```
+PUT on `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vMScaleSetName}/virtualMachines/{instance-id}?api-version=2019-03-01`
+```
+
+```json
+{
+ "properties": {
+ "protectionPolicy": {
+ "protectFromScaleIn": true,
+ "protectFromScaleSetActions": true
+ }
+ }
+}
+
+```
+
+> [!NOTE]
+>Instance protection is only supported with API version 2019-03-01 and above.<br>
>Protecting an instance from scale set actions also protects the instance from Autoscale initiated scale-in. You can't specify `"protectFromScaleIn": false` when setting `"protectFromScaleSetActions": true`.
+
+### Azure PowerShell
+
+Use the [Update-AzVmssVM](/powershell/module/az.compute/update-azvmssvm) cmdlet to apply protection from scale set actions to your scale set instance.
+
+The following example applies protection from scale set actions to an instance in the scale set having instance ID 0.
+
+```azurepowershell-interactive
+Update-AzVmssVM `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myVMScaleSet" `
+ -InstanceId 0 `
+ -ProtectFromScaleIn $true `
+ -ProtectFromScaleSetAction $true
+```
+
+### Azure CLI 2.0
+
+Use [az vmss update](/cli/azure/vmss#az-vmss-update) to apply protection from scale set actions to your scale set instance.
+
+The following example applies protection from scale set actions to an instance in the scale set having instance ID 0.
+
+```azurecli-interactive
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
+ --instance-id 0 \
+ --protect-from-scale-in true \
+ --protect-from-scale-set-actions true
+```
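
The same command can later remove the protection once the long-running work is finished; clearing both flags returns the instance to normal Autoscale and scale set behavior. This is a minimal sketch using the same placeholders.

```azurecli-interactive
# Remove both protection flags from instance 0.
az vmss update \
  --resource-group <myResourceGroup> \
  --name <myVMScaleSet> \
  --instance-id 0 \
  --protect-from-scale-in false \
  --protect-from-scale-set-actions false
```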
+
+## Troubleshoot
+### No protectionPolicy on scale set model
+Instance protection is only applicable on scale set instances and not on the scale set model.
+
+### No protectionPolicy on scale set instance model
+By default, protection policy is not applied to an instance when it is created.
+
+You can apply instance protection to scale set instances after the instances are created.
+
+### Not able to apply instance protection
+Instance protection is only supported with API version 2019-03-01 and above. Check the API version being used and update as required. You might also need to update your PowerShell or CLI to the latest version.
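
One way to confirm which API versions the Microsoft.Compute resource provider exposes for scale sets in your subscription is to query the provider with the Azure CLI; check that 2019-03-01 or later is listed. A minimal sketch:

```azurecli-interactive
# List the API versions available for virtual machine scale sets in this subscription.
az provider show \
  --namespace Microsoft.Compute \
  --query "resourceTypes[?resourceType=='virtualMachineScaleSets'].apiVersions | [0]" \
  --output table
```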
+
+## Next steps
+Learn how to [deploy your application](virtual-machine-scale-sets-deploy-app.md) on virtual machine scale sets.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
As a new rollout is triggered every month, a VM will receive at least one patch
| Canonical | 0001-com-ubuntu-server-focal | 20_04-lts |
| Canonical | 0001-com-ubuntu-pro-focal | pro-20_04-lts |
| Redhat | RHEL | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7_9, 7-RAW, 7-LVM |
-| Redhat | RHEL | 8, 8.1, 8.2, 8_3, 8_4, 8-LVM |
+| Redhat | RHEL | 8, 8.1, 8.2, 8_3, 8_4, 8_5, 8-LVM |
| Redhat | RHEL-RAW | 8-raw |
| OpenLogic | CentOS | 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7_8, 7_9, 7-LVM |
| OpenLogic | CentOS | 8.0, 8_1, 8_2, 8_3, 8-lvm |
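
For reference, a VM created from one of the supported images still has to be opted in to automatic guest patching. The following is a minimal sketch of one way to do that for an existing Linux VM, assuming the standard `patchSettings` path on the VM model (Windows VMs use `osProfile.windowsConfiguration` instead, and other prerequisites such as the VM agent still apply).

```azurecli-interactive
# Opt an existing Linux VM in to platform-orchestrated (automatic) guest patching.
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.linuxConfiguration.patchSettings.patchMode=AutomaticByPlatform
```
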
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
[VM Generation Support](generation-2.md): Generation 1 and 2<br>
[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
-[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
<br>
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max burst cached and temp storage throughput: IOPS / MBps<sup>1</sup> | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) |
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Key differences between persistent and ephemeral OS disks:
| **Disk type support**| Managed and unmanaged OS disk| Managed OS disk only|
| **Region support**| All regions| All regions|
| **Data persistence**| Data written to the OS disk is stored in Azure Storage| Data written to the OS disk is stored on local VM storage and isn't persisted to Azure Storage. |
-| **Stop-deallocated state**| VMs and scale set instances can be stop-deallocated and restarted from the stop-deallocated state | VMs and scale set instances cannot be stop-deallocated|
+| **Stop-deallocated state**| VMs and scale set instances can be stop-deallocated and restarted from the stop-deallocated state | Not Supported |
| **Specialized OS disk support** | Yes| No|
| **OS disk resize**| Supported during VM creation and after VM is stop-deallocated| Supported during VM creation only|
| **Resizing to a new VM size**| OS disk data is preserved| Data on the OS disk is deleted, OS is reprovisioned |
| **Redeploy** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned |
-| **Stop/ Start of VM** | OS disk data is preserved | VMs and scale set instances cannot be stopped. |
+| **Stop/ Start of VM** | OS disk data is preserved | Not Supported |
| **Page file placement**| For Windows, page file is stored on the resource disk| For Windows, page file is stored on the OS disk (for both OS cache placement and Temp disk placement).|
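
For context, an ephemeral OS disk is requested when the VM is created and cannot be added later. The following is a minimal Azure CLI sketch; the image alias, VM size, and other values are placeholders, and the chosen size must have a cache or temp disk large enough to hold the OS image.

```azurecli-interactive
# Create a VM whose OS disk is ephemeral (placed on local VM storage; read-only caching is required).
az vm create \
  --resource-group myResourceGroup \
  --name myEphemeralVM \
  --image Ubuntu2204 \
  --size Standard_DS3_v2 \
  --ephemeral-os-disk true \
  --os-disk-caching ReadOnly \
  --admin-username azureuser \
  --generate-ssh-keys
```
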
virtual-machines Azure Hybrid Benefit Byos Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-byos-linux.md
To start using the benefit for SUSE:
| License Type | Software Updates | Allowed VMs|
||||
- | SLES_STANDARD | Installs SLES standard repositories into your virtual machine. | SLES BYOS VMs, SLES custom on-prem image VMs|
+ | SLES | Installs SLES standard repositories into your virtual machine. | SLES BYOS VMs, SLES custom on-prem image VMs|
| SLES_SAP | Installs SLES SAP repositories into your virtual machine. | SLES SAP BYOS VMs, SLES custom on-prem image VMs|
| SLES_HPC | Installs SLES High Performance Compute related repositories into your virtual machine. | SLES HPC BYOS VMs, SLES custom on-prem image VMs|
you can use the `az vm update` command to update existing license type on runnin
## Enable and disable the benefit for SLES
You can install the `AHBForSLES` extension to enable the benefit. After successfully installing the extension,
-you can use the `az vm update` command to update existing license type on running VMs. For SLES VMs, run the command and set `--license-type` parameter to one of the following: `SLES_STANDARD`, `SLES_SAP` or `SLES_HPC`.
+you can use the `az vm update` command to update existing license type on running VMs. For SLES VMs, run the command and set `--license-type` parameter to one of the following: `SLES`, `SLES_SAP` or `SLES_HPC`.
### CLI example to enable the benefit for SLES
1. Install the Azure Hybrid Benefit extension on a running VM using the portal or via Azure CLI using the command below:
   ```azurecli
- az vm extension set -n AHBForSLES --publisher publisherName --vm-name myVMName --resource-group myResourceGroup
+ az vm extension set -n AHBForSLES --publisher SUSE.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
   ```
1. Once the extension is installed successfully, change the license type based on your requirements:
   ```azurecli
   # This will enable the benefit to fetch software updates for SLES STANDARD repositories
- az vm update -g myResourceGroup -n myVmName --license-type SLES_STANDARD
+ az vm update -g myResourceGroup -n myVmName --license-type SLES
   # This will enable the benefit to fetch software updates for SLES SAP repositories
   az vm update -g myResourceGroup -n myVmName --license-type SLES_SAP
To check the status of Azure Hybrid Benefit for BYOS VM status
1. You can view the Azure Hybrid Benefit status of a VM by using the Azure CLI or by using Azure Instance Metadata Service. You can use the below command for this purpose. Look for a `licenseType` field in the response. If the `licenseType` field exists and the value is one of the below, your VM has the benefit enabled:
- `RHEL_BASE`, `RHEL_EUS`, `RHEL_BASESAPAPPS`, `RHEL_SAPHA`, `RHEL_BASESAPAPPS`, `RHEL_BASESAPHA`, `SLES_STANDARD`, `SLES_SAP`, `SLES_HPC`.
+ `RHEL_BASE`, `RHEL_EUS`, `RHEL_BASESAPAPPS`, `RHEL_SAPHA`, `RHEL_BASESAPAPPS`, `RHEL_BASESAPHA`, `SLES`, `SLES_SAP`, `SLES_HPC`.
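
   For the Instance Metadata Service route mentioned above, a minimal sketch run from inside the VM could look like the following, assuming the standard IMDS endpoint and a current `api-version`; the Azure CLI command follows after it.

   ```bash
   # From inside the VM: read the licenseType field from the Instance Metadata Service.
   curl -s -H "Metadata:true" "http://169.254.169.254/metadata/instance/compute/licenseType?api-version=2021-02-01&format=text"
   ```
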
   ```azurecli
   az vm get-instance-view -g MyResourceGroup -n MyVm
A: On using AHB for BYOS VMs, you will essentially convert your bring your own s
| RHEL_BASESAPAPPS | [RHEL for SAP Business Applications](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-business/) |
| RHEL_BASESAPHA | [RHEL for SAP with HA](https://azure.microsoft.com/pricing/details/virtual-machines/rhel-sap-ha/) |
| RHEL_EUS | [Red Hat Enterprise Linux](https://azure.microsoft.com/pricing/details/virtual-machines/red-hat/) |
-| SLES_ STANDARD | [SLES Standard](https://azure.microsoft.com/pricing/details/virtual-machines/sles-standard/) |
+| SLES | [SLES Standard](https://azure.microsoft.com/pricing/details/virtual-machines/sles-standard/) |
| SLES_SAP | [SLES SAP](https://azure.microsoft.com/pricing/details/virtual-machines/sles-sap/) |
| SLES_HPC | [SLES HPC](https://azure.microsoft.com/pricing/details/virtual-machines/sles-hpc-standard/) |
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
az vm run-command create --name "myRunCommand" --vm-name "myVM" --resource-group
This command will return a full list of previously deployed Run Commands along with their properties.

```azurecli-interactive
-az vm run-command list --name "myVM" --resource-group "myRG"
+az vm run-command list --vm-name "myVM" --resource-group "myRG"
``` ### Get execution status and results
virtual-network What Is Ip Address 168 63 129 16 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/what-is-ip-address-168-63-129-16.md
The public IP address 168.63.129.16 is used in all regions and all national clou
- When the VM is part of a load balancer backend pool, [health probe](../load-balancer/load-balancer-custom-probe-overview.md) communication should be allowed to originate from 168.63.129.16. The default network security group configuration has a rule that allows this communication. This rule leverages the [AzureLoadBalancer](../virtual-network/service-tags-overview.md#available-service-tags) service tag. If desired, this traffic can be blocked by configuring the network security group; however, this will result in probes that fail.
+## Troubleshoot connectivity
+### Windows OS
+You can test communication to 168.63.129.16 by using the following PowerShell tests.
+
+```
+Test-NetConnection -ComputerName 168.63.129.16 -Port 80
+Test-NetConnection -ComputerName 168.63.129.16 -Port 32526
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://168.63.129.16/?comp=versions
+```
+Results should return as shown below.
+
+```
+Test-NetConnection -ComputerName 168.63.129.16 -Port 80
+ComputerName : 168.63.129.16
+RemoteAddress : 168.63.129.16
+RemotePort : 80
+InterfaceAlias : Ethernet
+SourceAddress : 10.0.0.4
+TcpTestSucceeded : True
+```
+
+```
+Test-NetConnection -ComputerName 168.63.129.16 -Port 32526
+ComputerName : 168.63.129.16
+RemoteAddress : 168.63.129.16
+RemotePort : 32526
+InterfaceAlias : Ethernet
+SourceAddress : 10.0.0.4
+TcpTestSucceeded : True
+```
+
+```
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://168.63.129.16/?comp=versions
+xml                            Versions
+---                            --------
+version="1.0" encoding="utf-8" Versions
+```
+You can also test communication to 168.63.129.16 by using telnet or psping.
+
+If successful, telnet should connect and the file that is created will be empty.
+
+```
+telnet 168.63.129.16 80 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port80.txt
+telnet 168.63.129.16 32526 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port32526.txt
+```
+
+```
+Psping 168.63.129.16:80 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port80.txt
+Psping 168.63.129.16:32526 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port32526.txt
+```
+### Linux OS
+On Linux, you can test communication to 168.63.129.16 by using the following tests.
+
+```
+echo "Testing 80 168.63.129.16 Port 80" > 168-63-129-16_test.txt
+traceroute -T -p 80 168.63.129.16 >> 168-63-129-16_test.txt
+echo "Testing 80 168.63.129.16 Port 32526" >> 168-63-129-16_test.txt
+traceroute -T -p 32526 168.63.129.16 >> 168-63-129-16_test.txt
+echo "Test 168.63.129.16 Versions" >> 168-63-129-16_test.txt
+curl http://168.63.129.16/?comp=versions >> 168-63-129-16_test.txt
+```
+
+Results inside 168-63-129-16_test.txt should return as shown below.
+
+```
+traceroute -T -p 80 168.63.129.16
+traceroute to 168.63.129.16 (168.63.129.16), 30 hops max, 60 byte packets
+1 168.63.129.16 (168.63.129.16) 0.974 ms 1.085 ms 1.078 ms
+
+traceroute -T -p 32526 168.63.129.16
+traceroute to 168.63.129.16 (168.63.129.16), 30 hops max, 60 byte packets
+1 168.63.129.16 (168.63.129.16) 0.883 ms 1.004 ms 1.010 ms
+
+curl http://168.63.129.16/?comp=versions
+<?xml version="1.0" encoding="utf-8"?>
+<Versions>
+<Preferred>
+<Version>2015-04-05</Version>
+</Preferred>
+<Supported>
+<Version>2015-04-05</Version>
+<Version>2012-11-30</Version>
+<Version>2012-09-15</Version>
+<Version>2012-05-15</Version>
+<Version>2011-12-31</Version>
+<Version>2011-10-15</Version>
+<Version>2011-08-31</Version>
+<Version>2011-04-07</Version>
+<Version>2010-12-15</Version>
+<Version>2010-28-10</Version>
+</Supported>
+```
+ ## Next steps - [Security groups](./network-security-groups-overview.md)-- [Create, change, or delete a network security group](manage-network-security-group.md)
+- [Create, change, or delete a network security group](manage-network-security-group.md)