Updates from: 09/13/2021 03:03:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Ropc Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-ropc-policy.md
Previously updated : 06/16/2021 Last updated : 09/12/2021
zone_pivot_groups: b2c-policy-type
In Azure Active Directory B2C (Azure AD B2C), the resource owner password credentials (ROPC) flow is an OAuth standard authentication flow. In this flow, an application, also known as the relying party, exchanges valid credentials for tokens. The credentials include a user ID and password. The tokens returned are an ID token, access token, and a refresh token.

## ROPC flow notes

In Azure Active Directory B2C (Azure AD B2C), the following options are supported:
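For orientation, a token request in the ROPC flow might look like the following sketch. The tenant name, policy name, application ID, and credentials are placeholders, and the exact parameters depend on your policy; see the full article for the supported options and configuration steps.

```bash
# Hedged sketch of an ROPC token request against Azure AD B2C (placeholder values throughout).
curl -X POST "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1A_ROPC_Auth/oauth2/v2.0/token" \
  --data-urlencode "grant_type=password" \
  --data-urlencode "username=user@contoso.com" \
  --data-urlencode "password=<password>" \
  --data-urlencode "client_id=<application-id>" \
  --data-urlencode "scope=openid <application-id> offline_access" \
  --data-urlencode "response_type=token id_token"
```

The response contains the ID token, access token, and refresh token described above.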
active-directory-b2c Contentdefinitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/contentdefinitions.md
Previously updated : 08/04/2021 Last updated : 09/12/2021
The **DataUri** element is used to specify the page identifier. Azure AD B2C use
You can enable [JavaScript client-side code](javascript-and-page-layout.md) by inserting `contract` between `elements` and the page type. For example, `urn:com:microsoft:aad:b2c:elements:contract:page-name:version`.

The [version](page-layout.md) part of the `DataUri` specifies the package of content containing HTML, CSS, and JavaScript for the user interface elements in your policy. If you intend to enable JavaScript client-side code, the elements you base your JavaScript on must be immutable. If they're not immutable, any changes could cause unexpected behavior on your user pages. To prevent these issues, enforce the use of a page layout and specify a page layout version. Doing so ensures that all content definitions you've based your JavaScript on are immutable. Even if you don't intend to enable JavaScript, you still need to specify the page layout version for your pages. The following example shows the **DataUri** of `selfasserted` version `1.2.0`:
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 06/21/2021 Last updated : 09/12/2021
The following table summarizes the OAuth 2.0 and OpenId Connect application auth
[On-behalf-of](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)| NA | NA | An application invokes a service or web API, which in turn needs to call another service or web API. <br /> <br /> For the middle-tier service to make authenticated requests to the downstream service, pass a *client credential* token in the authorization header. Optionally, you can include a custom header with the Azure AD B2C user's token. |
| [OpenId Connect](openid-connect.md) | GA | GA | OpenID Connect introduces the concept of an ID token, which is a security token that allows the client to verify the identity of the user. |
| [OpenId Connect hybrid flow](openid-connect.md) | GA | GA | Allows a web application to retrieve the ID token on the authorize request along with an authorization code. |
-[Resource owner password credentials (ROPC)](add-ropc-policy.md) | Preview | Preview | Allows a mobile application to sign in the user by directly handling their password. |
+[Resource owner password credentials (ROPC)](add-ropc-policy.md) | GA | GA | Allows a mobile application to sign in the user by directly handling their password. |
+| [Sign-out](session-behavior.md#sign-out)| GA | GA | |
+| [Single sign-out](session-behavior.md#sign-out) | NA | Preview | |
### OAuth 2.0 options
The following table summarizes the OAuth 2.0 and OpenId Connect application auth
| Insert JSON into user journey via `client_assertion`| NA| Deprecated | |
| Insert JSON into user journey as [id_token_hint](id-token-hint.md) | NA | GA | |
| [Pass identity provider token to the application](idp-pass-through-user-flow.md)| Preview| Preview| For example, from Facebook to app. |
+| [Keep me signed in (KMSI)](session-behavior.md#enable-keep-me-signed-in-kmsi)| GA| GA| |
## SAML2 application authentication flows
The following table summarizes the Security Assertion Markup Language (SAML) app
|Feature |User flow |Custom policy |Notes |
||::|::||
| [Multi-language support](localization.md)| GA | GA | |
+| [Custom domains](custom-domain.md)| Preview | Preview | |
| [Custom email verification](custom-email-mailjet.md) | NA | GA| |
| [Customize the user interface with built-in templates](customize-ui.md) | GA| GA| |
| [Customize the user interface with custom templates](customize-ui-with-html.md) | GA| GA| By using HTML templates. |
The following table summarizes the Security Assertion Markup Language (SAML) app
+ ## Identity providers

|Feature |User flow |Custom policy |Notes |
The following table summarizes the Security Assertion Markup Language (SAML) app
| [External login session provider](custom-policy-reference-sso.md#externalloginssosessionprovider) | GA | |
| [SAML SSO session provider](custom-policy-reference-sso.md#samlssosessionprovider) | GA | |
| [OAuth SSO Session Provider](custom-policy-reference-sso.md#oauthssosessionprovider) | GA| |
-| [Single sign-out](session-behavior.md#sign-out) | Preview | |
+ ### Components
The following table summarizes the Security Assertion Markup Language (SAML) app
| [Azure Active Directory](active-directory-technical-profile.md) as local directory | GA | |
| [Predicate validations](predicates.md) | GA | For example, password complexity. |
| [Display controls](display-controls.md) | GA | |
+| [Sub journeys](subjourneys.md) | GA | |
### Developer interface
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Previously updated : 08/09/2021 Last updated : 09/12/2021
If you want to get the `family_name` and `given_name` claims from Azure AD, you
 - **Display name**: *name*
 - **Given name**: *given_name*
 - **Surname**: *family_name*
- - **Email**: *preferred_username*
+ - **Email**: *email*
1. Select **Save**.
If the sign-in process is successful, your browser is redirected to `https://jwt
## Next steps
-Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
active-directory Secure Least Privileged Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/secure-least-privileged-access.md
Title: Best practices for least privileged access on Azure AD - Microsoft identity platform
-description: Learn about a set of best practices and general guidance for least privilege.
+ Title: "Increase app security with the principle of least privilege"
+
+description: Learn how the principle of least privilege can help increase the security of your application, its data, and which features of the Microsoft identity platform you can use to implement least privileged access.
-
+ - Previously updated : 04/26/2021+ Last updated : 09/09/2021
-#Customer intent: As a developer, I want to learn how to stay least privileged and require just enough permissions for my application.
+# Customer intent: As a developer, I want to learn about the principle of least privilege and the features of the Microsoft identity platform that I can use to ensure my application and its users are restricted to the actions and have access to only the data they need to perform their tasks.
-# Best practices for least privileged access for applications
+# Enhance security with the principle of least privilege
+
+The information security principle of least privilege asserts that users and applications should be granted access only to the data and operations they require to perform their jobs.
+
+Follow the guidance here to help reduce your application's attack surface and the impact of a security breach (the *blast radius*) should one occur in your Microsoft identity platform-integrated application.
+
+## Recommendations at a glance
+
+- Prevent **overprivileged** applications by revoking *unused* and *reducible* permissions.
+- Use the identity platform's **consent** framework to require that a human consents to the app's request to access protected data.
+- **Build** applications with least privilege in mind during all stages of development.
+- **Audit** your deployed applications periodically to identify overprivileged apps.
+
+## What's an *overprivileged* application?
+
+Any application that's been granted an **unused** or **reducible** permission is considered "overprivileged." Unused and reducible permissions have the potential to provide unauthorized or unintended access to data or operations not required by the app or its users to perform their jobs.
+
+ :::column span="":::
+ ### Unused permissions
+
+ An unused permission is a permission that's been granted to an application but whose API or operation exposed by that permission isn't called by the app when used as intended.
+
+ - **Example**: An application displays a list of files stored in the signed-in user's OneDrive by calling the Microsoft Graph API and leveraging the [Files.Read](/graph/permissions-reference) permission. However, the app has also been granted the [Calendars.Read](/graph/permissions-reference#calendars-permissions) permission, yet it provides no calendar features and doesn't call the Calendars API.
-The principle of least privilege is an information security concept, which enforces the idea that users and applications should be granted the minimum level of access needed to perform required tasks. Understanding the principle of least privilege helps you build trustworthy applications for your customers.
+ - **Security risk**: Unused permissions pose a *horizontal privilege escalation* security risk. An entity that exploits a security vulnerability in your application could use an unused permission to gain access to an API or operation not normally supported or allowed by the application when it's used as intended.
-Least privilege adoption is more than just a good security practice. The concept helps you preserve the integrity and security of your data. It also protects the privacy of your data and reduces risks by preventing applications from having access to data any more than absolutely needed. Looked at on a broader level, the adoption of the least privilege principle is one of the ways organizations can embrace proactive security with [Zero Trust](https://www.microsoft.com/security/business/zero-trust).
+ - **Mitigation**: Remove any permission that isn't used in API calls made by your application.
+ :::column-end:::
+ :::column span="":::
+ ### Reducible permissions
-This article describes a set of best practices that you can use to adopt the least privilege principle to make your applications more secure for end users. You'll get to understand the following aspects of least privilege:
-- How consent works with permissions-- What it means for an app to be overprivileged or least privileged-- How to approach least privilege as a developer-- How to approach least privilege as an organization
+ A reducible permission is a permission that has a lower-privileged counterpart that would still provide the application and its users the access they need to perform their required tasks.
-## Using consent to control access permissions to data
+ - **Example**: An application displays the signed-in user's profile information by calling the Microsoft Graph API, but doesn't support profile editing. However, the app has been granted the [User.ReadWrite.All](/graph/permissions-reference#user-permissions) permission. The *User.ReadWrite.All* permission is considered reducible here because the less permissive *User.Read.All* permission grants sufficient read-only access to user profile data.
-Access to protected data requires [consent](../develop/application-consent-experience.md#consent-and-permissions) from the end user. Whenever an application that runs in your user's device requests access to protected data, the app should ask for the user's consent before granting access to the protected data. The end user is required to grant (or deny) consent for the requested permission before the application can progress. As an application developer, it's best to request access permission with the least privilege.
+ - **Security risk**: Reducible permissions pose a *vertical privilege escalation* security risk. An entity that exploits a security vulnerability in your application could use the reducible permission for unauthorized access to data or to perform operations not normally allowed by that entity's role.
+ - **Mitigation**: Replace each reducible permission in your application with its least-permissive counterpart still enabling the application's intended functionality.
+ :::column-end:::
-## Overprivileged and least privileged applications
+Avoid security risks posed by unused and reducible permissions by granting *just enough* permission: the permission with the least-permissive access required by an application or user to perform their required tasks.
-An overprivileged application may have one of the following characteristics:
-- **Unused permissions**: An application could end up with unused permissions when it fails to make API calls that utilize all the permissions granted to it. For example in [MS Graph](/graph/overview), an app might only be reading OneDrive files (using the "*Files.Read.All*" permission) but has also been granted "*Calendars.Read*" permission, despite not integrating with any Calendar APIs.-- **Reducible permissions**: An app has reducible permission when the granted permission has a lesser privileged replacement that can complete the desired API call. For example, an app that is only reading User profiles, but has been granted "*User.ReadWrite.All*" might be considered overprivileged. In this case, the app should be granted "*User.Read.All*" instead, which is the least privileged permission needed to satisfy the request.
+## Use consent to control access to data
-For an application to be considered as least privileged, it should have:
-- **Just enough permissions**: Grant only the minimum set of permissions required by an end user of an application, service, or system to perform the required tasks.
+Most applications you build will require access to protected data, and the owner of that data needs to [consent](application-consent-experience.md#consent-and-permissions) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves, who can grant access.
-## Approaching least privilege as an application developer
+Whenever an application that runs in your user's device requests access to protected data, the app should ask for the user's consent before granting access to the protected data. The end user is required to grant (or deny) consent for the requested permission before the application can progress.
-As a developer, you have a responsibility to contribute to the security of your customer's data. When developing your applications, you need to adopt the principle of least privilege. We recommend that you follow these steps to prevent your application from being overprivileged:
-- Fully understand the permissions required for the API calls that your application needs to make-- Understand the least privileged permission for each API call that your app needs to make using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)-- Find the corresponding [permissions](/graph/permissions-reference) from least to most privileged-- Remove any duplicate sets of permissions in cases where your app makes API calls that have overlapping permissions-- Apply only the least privileged set of permissions to your application by choosing the least privileged permission in the permission list
-## Approaching least privilege as an organization
+## Least privilege during app development
-Organizations often hesitate to modify existing applications as it might affect business operations, but that presents a challenge when already granted permissions are overprivileged and need to be revoked. As an organization, it's good practice to check and review your permissions regularly. We recommend you follow these steps to make your applications stay healthy:
-- Evaluate the API calls being made from your applications-- Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and the [Microsoft Graph](/graph/overview) documentation for the required and least privileged permissions-- Audit privileges that are granted to users or applications-- Update your applications with the least privileged permission set-- Conduct permissions review regularly to make sure all authorized permissions are still relevant
+As a developer building an application, consider the security of your app and its users' data to be *your* responsibility.
+
+Adhere to these guidelines during application development to help avoid building an overprivileged app (a CLI sketch follows the list):
+
+- Fully understand the permissions required for the API calls that your application needs to make.
+- Understand the least privileged permission for each API call that your app needs to make using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+- Find the corresponding [permissions](/graph/permissions-reference) from least to most privileged.
+- Remove any duplicate sets of permissions in cases where your app makes API calls that have overlapping permissions.
+- Apply only the least privileged set of permissions to your application by choosing the least privileged permission in the permission list.
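As one illustration of granting only a least-privileged permission, a hedged Azure CLI sketch follows. The application ID is a placeholder, and the GUID is the published identifier for the Microsoft Graph *User.Read* delegated permission; verify both against the [permissions reference](/graph/permissions-reference) before using them.

```azurecli
# Placeholder application (client) ID of your app registration.
APP_ID=00000000-0000-0000-0000-000000000000

# Well-known application ID of the Microsoft Graph resource.
GRAPH_ID=00000003-0000-0000-c000-000000000000

# Grant only the least-privileged delegated permission needed to read the signed-in user's profile (User.Read).
az ad app permission add --id $APP_ID --api $GRAPH_ID \
    --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
```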
+
+## Least privilege for deployed apps
+
+Organizations often hesitate to modify running applications to avoid impacting their normal business operations. However, your organization should consider mitigating the risk of a security incident made possible or more severe by your app's overprivileged permissions to be worthy of a scheduled application update.
+
+Make these standard practices in your organization to help ensure your deployed apps aren't overprivileged and don't become overprivileged over time (see the sketch after this list):
+
+- Evaluate the API calls being made from your applications.
+- Use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) and the [Microsoft Graph](/graph/overview) documentation for the required and least privileged permissions.
+- Audit privileges that are granted to users or applications.
+- Update your applications with the least privileged permission set.
+- Conduct permissions reviews regularly to make sure all authorized permissions are still relevant.
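As a starting point for such an audit, a hedged Azure CLI sketch is shown below. The application ID is a placeholder, and you would still compare the output against the API calls your app actually makes.

```azurecli
# List the permissions configured on the app registration (placeholder ID).
az ad app permission list --id 00000000-0000-0000-0000-000000000000

# List the OAuth2 permission grants (consented delegated permissions) recorded for the app.
az ad app permission list-grants --id 00000000-0000-0000-0000-000000000000
```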
## Next steps

-- For more information on consent and permissions in Azure Active Directory, see [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md).
-- For more information on permissions and consent in Microsoft identity, see [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md).
-- For more information on Zero Trust, see [Zero Trust Deployment Center](/security/zero-trust/).
+**Protected resource access and consent**
+
+For more information about configuring access to protected resources and the user experience of providing consent to access those protected resources, see the following articles:
+
+- [Permissions and consent in the Microsoft identity platform](../develop/v2-permissions-and-consent.md)
+- [Understanding Azure AD application consent experiences](../develop/application-consent-experience.md)
+
+**Zero Trust** - Consider employing the least-privilege measures described here as part of your organization's proactive [Zero Trust security strategy](/security/zero-trust/).
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
\*\*\*Requires [additional configuration](how-to-connect-sso-quick-start.md#browser-considerations).
-\*\*\*\*Microosft Edge based on Chromium
+\*\*\*\*Microsoft Edge based on Chromium
## Next steps
active-directory Alexishr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/alexishr-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with AlexisHR'
+description: Learn how to configure single sign-on between Azure Active Directory and AlexisHR.
++++++++ Last updated : 09/08/2021++++
+# Tutorial: Azure AD SSO integration with AlexisHR
+
+In this tutorial, you'll learn how to integrate AlexisHR with Azure Active Directory (Azure AD). When you integrate AlexisHR with Azure AD, you can:
+
+* Control in Azure AD who has access to AlexisHR.
+* Enable your users to be automatically signed-in to AlexisHR with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* AlexisHR single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* AlexisHR supports **IDP** initiated SSO.
+
+## Add AlexisHR from the gallery
+
+To configure the integration of AlexisHR into Azure AD, you need to add AlexisHR from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **AlexisHR** in the search box.
+1. Select **AlexisHR** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for AlexisHR
+
+Configure and test Azure AD SSO with AlexisHR using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AlexisHR.
+
+To configure and test Azure AD SSO with AlexisHR, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure AlexisHR SSO](#configure-alexishr-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create AlexisHR test user](#create-alexishr-test-user)** - to have a counterpart of B.Simon in AlexisHR that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **AlexisHR** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `urn:auth0:alexishr:<YOUR_CONNECTION_NAME>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://auth.alexishr.com/login/callback?connection=<YOUR_CONNECTION_NAME>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [AlexisHR Client support team](mailto:support@alexishr.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The AlexisHR application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the AlexisHR application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | | |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up AlexisHR** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AlexisHR.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **AlexisHR**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure AlexisHR SSO
+
+1. Log in to your AlexisHR company site as an administrator.
+
+1. Go to **Settings** > **SAML Single sign-on** and click **New identity provider**.
+
+1. In the **New identity provider** section, perform the following steps:
+
+ ![Screenshot shows the Account Settings.](./media/alexishr-tutorial/account.png " Settings")
+
+ 1. In the **Identity provider SSO URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. In the **Identity provider sign out URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ 1. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **Public x509 certificate** textbox.
+
+ 1. Click **Create identity provider**.
+
+1. After creating the identity provider, you will receive the following information.
+
+ ![Screenshot shows the SSO Settings.](./media/alexishr-tutorial/certificate.png "SSO configuration")
+
+ 1. Copy **Audience URI** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy **Assertion Consumer Service URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+### Create AlexisHR test user
+
+In this section, you create a user called Britta Simon in AlexisHR. Work with [AlexisHR support team](mailto:support@alexishr.com) to add the users in the AlexisHR platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the AlexisHR for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the AlexisHR tile in the My Apps, you should be automatically signed in to the AlexisHR for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure AlexisHR you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Crowdstrike Falcon Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/crowdstrike-falcon-platform-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with CrowdStrike Falcon Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and CrowdStrike Falcon Platform.
++++++++ Last updated : 09/02/2021++++
+# Tutorial: Azure AD SSO integration with CrowdStrike Falcon Platform
+
+In this tutorial, you'll learn how to integrate CrowdStrike Falcon Platform with Azure Active Directory (Azure AD). When you integrate CrowdStrike Falcon Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to CrowdStrike Falcon Platform.
+* Enable your users to be automatically signed-in to CrowdStrike Falcon Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A valid CrowdStrike Falcon subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* CrowdStrike Falcon Platform supports **SP and IDP** initiated SSO.
+
+## Adding CrowdStrike Falcon Platform from the gallery
+
+To configure the integration of CrowdStrike Falcon Platform into Azure AD, you need to add CrowdStrike Falcon Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **CrowdStrike Falcon Platform** in the search box.
+1. Select **CrowdStrike Falcon Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for CrowdStrike Falcon Platform
+
+Configure and test Azure AD SSO with CrowdStrike Falcon Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CrowdStrike Falcon Platform.
+
+To configure and test Azure AD SSO with CrowdStrike Falcon Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure CrowdStrike Falcon Platform SSO](#configure-crowdstrike-falcon-platform-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create CrowdStrike Falcon Platform test user](#create-crowdstrike-falcon-platform-test-user)** - to have a counterpart of B.Simon in CrowdStrike Falcon Platform that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **CrowdStrike Falcon Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | Identifier |
+ | -- |
+ | `https://falcon.crowdstrike.com/saml/metadata` |
+ | `https://falcon.us-2.crowdstrike.com/saml/metadata` |
+ | `https://falcon.eu-1.crowdstrike.com/saml/metadata` |
+ | `https://falcon.laggar.gcw.crowdstrike.com/saml/metadata` |
+ |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | Reply URL |
+ | -- |
+ | `https://falcon.crowdstrike.com/saml/acs` |
+ | `https://falcon.us-2.crowdstrike.com/saml/acs` |
+ | `https://falcon.eu-1.crowdstrike.com/saml/acs` |
+ | `https://falcon.laggar.gcw.crowdstrike.com/saml/acs` |
+ |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type one of the following URLs:
+
+ | Sign-on URL |
+ | -- |
+ | `https://falcon.crowdstrike.com/login` |
+ | `https://falcon.us-2.crowdstrike.com/login` |
+ | `https://falcon.eu-1.crowdstrike.com/login` |
+ | `https://falcon.laggar.gcw.crowdstrike.com/login` |
+ |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CrowdStrike Falcon Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **CrowdStrike Falcon Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure CrowdStrike Falcon Platform SSO
+
+To configure single sign-on on the **CrowdStrike Falcon Platform** side, you need to send the **App Federation Metadata Url** to the [CrowdStrike Falcon Platform support team](mailto:support@crowdstrike.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create CrowdStrike Falcon Platform test user
+
+In this section, you create a user called Britta Simon in CrowdStrike Falcon Platform. Work with the [CrowdStrike Falcon Platform support team](mailto:support@crowdstrike.com) to add the users in the CrowdStrike Falcon Platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the CrowdStrike Falcon Platform Sign-on URL, where you can initiate the login flow.
+
+* Go to CrowdStrike Falcon Platform Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the CrowdStrike Falcon Platform for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the CrowdStrike Falcon Platform tile in the My Apps, if configured in SP mode you will be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you will be automatically signed in to the CrowdStrike Falcon Platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure CrowdStrike Falcon Platform you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
++
active-directory Linkedinlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/linkedinlearning-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with LinkedIn Learning | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with LinkedIn Learning'
description: Learn how to configure single sign-on between Azure Active Directory and LinkedIn Learning.
Previously updated : 06/29/2021 Last updated : 09/01/2021
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with LinkedIn Learning
+# Tutorial: Azure AD SSO integration with LinkedIn Learning
In this tutorial, you'll learn how to integrate LinkedIn Learning with Azure Active Directory (Azure AD). When you integrate LinkedIn Learning with Azure AD, you can:
In this tutorial, you configure and test Azure AD SSO in a test environment.
* LinkedIn Learning supports **SP and IDP** initiated SSO.
* LinkedIn Learning supports **Just In Time** user provisioning.
+* LinkedIn Learning supports [Automated user provisioning](linkedin-learning-provisioning-tutorial.md).
## Add LinkedIn Learning from the gallery
active-directory Logmein Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logmein-tutorial.md
Previously updated : 06/18/2021 Last updated : 09/01/2021

# Tutorial: Azure Active Directory single sign-on (SSO) integration with LogMeIn
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* LogMeIn supports **SP and IDP** initiated SSO.
+* LogMeIn supports [Automated user provisioning](logmein-provisioning-tutorial.md).
## Adding LogMeIn from the gallery
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for user fields.](./media/logmein-tutorial/create-user.png)
+> [!NOTE]
+> LogMeIn also supports automatic user provisioning. You can find more details on how to configure it [here](./logmein-provisioning-tutorial.md).
+ ## Test SSO

In this section, you test your Azure AD single sign-on configuration with the following options.
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
As part of creating an AKS cluster, you may need to customize your cluster confi
## OS configuration
-AKS now supports Ubuntu 18.04 as the default node operating system (OS) in general availability (GA) for clusters in kubernetes versions higher than 1.18 For versions below 1.18, AKS Ubuntu 16.04 is still the default base image. From kubernetes v1.18 and higher, the default base is AKS Ubuntu 18.04.
-
-> [!IMPORTANT]
-> Node pools created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
->
-> It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
--
-### Use AKS Ubuntu 18.04 (GA) on new clusters
-
-Clusters created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
-
-It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
-
-To create a cluster using `AKS Ubuntu 18.04` node image, simply create a cluster running kubernetes v1.18 or greater as shown below
-
-```azurecli
-az aks create --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
-```
-
-### Use AKS Ubuntu 18.04 (GA) on existing clusters
-
-Clusters created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the cluster or node pool Kubernetes version is updated to v1.18 or greater.
-
-It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to using clusters on 1.18 or greater.
-
-If your clusters or node pools are ready for `AKS Ubuntu 18.04` node image, you can simply upgrade them to a v1.18 or higher as below.
-
-```azurecli
-az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
-```
-
-If you just want to upgrade just one node pool:
-
-```azurecli
-az aks nodepool upgrade -name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
-```
-
-### Test AKS Ubuntu 18.04 (GA) on existing clusters
-
-Node pools created on Kubernetes v1.18 or greater default to `AKS Ubuntu 18.04` node image. Node pools on a supported Kubernetes version less than 1.18 will still receive `AKS Ubuntu 16.04` as the node image, but will be updated to `AKS Ubuntu 18.04` once the node pool Kubernetes version is updated to v1.18 or greater.
-
-It is highly recommended to test your workloads on AKS Ubuntu 18.04 node pools prior to upgrading your production node pools.
-
-To create a node pool using `AKS Ubuntu 18.04` node image, simply create a node pool running kubernetes v1.18 or greater. Your cluster control plane needs to be at least on v1.18 or greater as well but your other node pools can remain on an older kubernetes version.
-Below we are first upgrading the control plane and then creating a new node pool with v1.18 that will receive the new node image OS version.
-
-```azurecli
-az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14 --control-plane-only
-
-az aks nodepool add --name ubuntu1804 --cluster-name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.18.14
-```
+AKS supports Ubuntu 18.04 as the default node operating system (OS) in general availability (GA) for clusters.
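As a quick check, a hedged Azure CLI sketch (resource group, cluster, and node pool names are placeholders) shows which node image a node pool is currently running:

```azurecli
# Show the node image version used by an existing node pool (placeholder names).
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --query nodeImageVersion
```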
## Container runtime configuration
-A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or operating system (OS) specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used for node pools using Kubernetes version 1.19 and greater, and Docker is used for node pools using Kubernetes 1.18 and earlier. For Windows Server 2019 node pools, `containerd` is available in preview and can be used in node pools using Kubernetes 1.20 and greater, but Docker is still used by default.
+A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or operating system (OS) specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used for node pools using Kubernetes version 1.19 and greater. For Windows Server 2019 node pools, `containerd` is available in preview and can be used in node pools using Kubernetes 1.20 and greater, but Docker is still used by default.
[`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. It was [donated](https://www.cncf.io/announcement/2017/03/29/containerd-joins-cloud-native-computing-foundation/) to the Cloud Native Compute Foundation (CNCF) in March of 2017. The current Moby (upstream Docker) version that AKS uses already leverages and is built on top of `containerd`, as shown above.
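To see which runtime each node is using, one option is the following sketch; it assumes `kubectl` is already configured against the cluster.

```bash
# The CONTAINER-RUNTIME column reports, for example, containerd://<version> or docker://<version>.
kubectl get nodes -o wide
```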
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
The minimum recommended Kubernetes version for this feature is 1.18.
- Supports CSI Inline volumes (Kubernetes version v1.15+)
- Supports mounting multiple secrets store objects as a single volume
- Supports pod portability with the SecretProviderClass CRD
-- Supports windows containers (Kubernetes version v1.18+)
+- Supports windows containers
- Sync with Kubernetes Secrets (Secrets Store CSI Driver v0.0.10+)
- Supports auto rotation of mounted contents and synced Kubernetes secrets (Secrets Store CSI Driver v0.0.15+)
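A hedged sketch of installing the driver and Azure Key Vault provider as the managed AKS add-on follows; the cluster and resource group names are placeholders.

```azurecli
# Enable the Secrets Store CSI Driver and Azure Key Vault provider add-on on an existing cluster.
az aks enable-addons \
    --addons azure-keyvault-secrets-provider \
    --name myAKSCluster \
    --resource-group myResourceGroup
```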
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/custom-node-configuration.md
The supported Kubelet parameters and accepted values are listed below, followed by a usage sketch after the table.
| `cpuCfsQuotaPeriod` | Interval in milliseconds (ms) | `100ms` | Sets CPU CFS quota period value. |
| `imageGcHighThreshold` | 0-100 | 85 | The percent of disk usage after which image garbage collection is always run. Minimum disk usage that **will** trigger garbage collection. To disable image garbage collection, set to 100. |
| `imageGcLowThreshold` | 0-100, no higher than `imageGcHighThreshold` | 80 | The percent of disk usage before which image garbage collection is never run. Minimum disk usage that **can** trigger garbage collection. |
-| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). Only kubernetes v1.18+. |
+| `topologyManagerPolicy` | none, best-effort, restricted, single-numa-node | none | Optimize NUMA node alignment, see more [here](https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/). |
| `allowedUnsafeSysctls` | `kernel.shm*`, `kernel.msg*`, `kernel.sem`, `fs.mqueue.*`, `net.*` | None | Allowed list of unsafe sysctls or unsafe sysctl patterns. |
| `containerLogMaxSizeMB` | Size in megabytes (MB) | 10 MB | The maximum size (for example, 10 MB) of a container log file before it's rotated. |
| `containerLogMaxFiles` | ≥ 2 | 5 | The maximum number of container log files that can be present for a container. |
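The usage sketch below writes a kubelet configuration file with a few of the parameters above and applies it to a new node pool through `--kubelet-config`. The file name, values, and resource names are placeholders.

```azurecli
# Write a kubelet configuration file (placeholder values chosen within the accepted ranges above).
cat > linuxkubeletconfig.json <<'EOF'
{
  "imageGcHighThreshold": 80,
  "imageGcLowThreshold": 70,
  "containerLogMaxSizeMB": 20,
  "containerLogMaxFiles": 6
}
EOF

# Add a node pool that uses the custom kubelet configuration.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubelet-config ./linuxkubeletconfig.json
```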
aks Developer Best Practices Pod Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/developer-best-practices-pod-security.md
When applications need a credential, they communicate with the digital vault, re
With Key Vault, you store and regularly rotate secrets such as credentials, storage account keys, or certificates. You can integrate Azure Key Vault with an AKS cluster using the [Azure Key Vault provider for the Secrets Store CSI Driver](https://github.com/Azure/secrets-store-csi-driver-provider-azure#usage). The Secrets Store CSI driver enables the AKS cluster to natively retrieve secret contents from Key Vault and securely provide them only to the requesting pod. Work with your cluster operator to deploy the Secrets Store CSI Driver onto AKS worker nodes. You can use a pod managed identity to request access to Key Vault and retrieve the secret contents needed through the Secrets Store CSI Driver.
-Azure Key Vault with Secrets Store CSI Driver can be used for Linux nodes and pods which require a Kubernetes version of 1.16 or greater. For Windows nodes and pods a Kubernetes version of 1.18 or greater is required.
## Next steps
aks Kubernetes Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-dashboard.md
For more information on the Kubernetes dashboard, see [Kubernetes Web UI Dashboa
> [!WARNING] > **The AKS dashboard add-on is set for deprecation. Use the [Kubernetes resource view in the Azure portal (preview)][kubernetes-portal] instead.**
-> * The Kubernetes dashboard is enabled by default for clusters running a Kubernetes version less than 1.18.
-> * The dashboard add-on will be disabled by default for all new clusters created on Kubernetes 1.18 or greater.
- > * Starting with Kubernetes 1.19 in preview, AKS will no longer support installation of the managed kube-dashboard addon.
- > * Existing clusters with the add-on enabled will not be impacted. Users will continue to be able to manually install the open-source dashboard as user-installed software.
+> * The dashboard add-on will be disabled by default for all new clusters.
+> * Starting with Kubernetes 1.19 in preview, AKS will no longer support installation of the managed kube-dashboard addon.
+> * Existing clusters with the add-on enabled will not be impacted. Users will continue to be able to manually install the open-source dashboard as user-installed software.
## Before you begin
You also need the Azure CLI version 2.6.0 or later installed and configured. Run
## Disable the Kubernetes dashboard
-The kube-dashboard addon is **enabled by default on clusters older than K8s 1.18**. The addon can be disabled by running the following command.
+The addon can be disabled by running the following command.
```azurecli
az aks disable-addons -g myRG -n myAKScluster -a kube-dashboard
```
-## Start the Kubernetes dashboard
-
-> [!WARNING]
-> The AKS dashboard add-on is deprecated for versions 1.19+. Please use the [Kubernetes resource view in the Azure portal (preview)][kubernetes-portal] instead.
-> * The following command will now open the Azure Portal resource view instead of the kubernetes dashboard for versions 1.19 and above.
-
-To start the Kubernetes dashboard on a cluster, use the [az aks browse][az-aks-browse] command. This command requires the installation of the kube-dashboard addon on the cluster, which is included by default on clusters running any version older than Kubernetes 1.18.
-
-The following example opens the dashboard for the cluster named *myAKSCluster* in the resource group named *myResourceGroup*:
-
-```azurecli
-az aks browse --resource-group myResourceGroup --name myAKSCluster
-```
-
-This command creates a proxy between your development system and the Kubernetes API, and opens a web browser to the Kubernetes dashboard. If a web browser doesn't open to the Kubernetes dashboard, copy and paste the URL address noted in the Azure CLI, typically `http://127.0.0.1:8001`.
-
-> [!NOTE]
-> If you do not see the dashboard at `http://127.0.0.1:8001` you can manually route to the following addresses. Clusters on 1.16 or greater use https and require a separate endpoint.
-> * K8s 1.16 or greater: `http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy`
-> * K8s 1.15 and below: `http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy`
- <!-- ![The login page of the Kubernetes web dashboard](./media/kubernetes-dashboard/dashboard-login.png)
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|-|--|--|-|
-| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | *1.21 GA |
| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA |
| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
| 1.22 | Aug-04-21 | Sept 2021 | Oct 2021 | 1.25 GA |
| 1.23 | Dec 2021 | Jan 2022 | Feb 2022 | 1.26 GA |
->[!NOTE]
->AKS version 1.18 will continue to be available until July 31st 2021. After this date, AKS will return to its regular three version window support. It is important to note the following as the support from June 30th to July 31st 2021 will be limited in scope. Below lists what the users will be limited to:
-> - Creation of new clusters and nodepools on 1.18.
-> - CRUD operations on 1.18 clusters.
-> - Azure Support of non-Kubernetes related, platform issues. Platform issues include trouble with networking, storage, or compute running on Azure. Any support requests for K8s patching and troubleshooting will be requested to upgrade into a supported version.
## FAQ

**How does Microsoft notify me of new Kubernetes versions?**
-The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases) as well as emails to subscription administrators who own clusters that are going to fall out of support. In addition to announcements, AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to notify the customer inside the Azure Portal to alert users if they are out of support, as well as alerting them of deprecated APIs that will affect their application or development process.
+The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases) as well as emails to subscription administrators who own clusters that are going to fall out of support. In addition to announcements, AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to notify the customer inside the Azure portal to alert users if they are out of support, as well as alerting them of deprecated APIs that will affect their application or development process.
**How often should I expect to upgrade Kubernetes versions to stay in support?**

Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you will be able to upgrade at a minimum of once a year to stay on a supported version.
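When planning such an upgrade, a hedged Azure CLI sketch (cluster, resource group, and version are placeholders) shows how to list the versions a cluster can upgrade to and then perform the upgrade:

```azurecli
# List the Kubernetes versions available for upgrading this cluster.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the cluster to a supported version (placeholder version).
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.21.2
```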
-For versions on 1.18 or below, the window of support remains at 9 months, requiring an upgrade once every 9 months to stay on a supported version. Regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
**What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?**

If you're on the *n-3* version or older, it means you're outside of support and will be asked to upgrade. When your upgrade from version n-3 to n-2 succeeds, you're back within our support policies. For example:
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
description: To create highly available and resilient applications in Azure, Ava
Previously updated : 08/04/2021 Last updated : 09/10/2021
To achieve comprehensive business continuity on Azure, build your application ar
| [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) | :large_blue_diamond: |
| [Azure API Management](../api-management/zone-redundancy.md) | :large_blue_diamond: |
| [Azure App Configuration](../azure-app-configuration/faq.yml#how-does-app-configuration-ensure-high-data-availability) | :large_blue_diamond: |
+| [Azure Batch](/azure/batch/create-pool-availability-zones) | :large_blue_diamond: |
| [Azure Bastion](../bastion/bastion-overview.md) | :large_blue_diamond: |
| [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | :large_blue_diamond: |
| [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | :large_blue_diamond: |
To achieve comprehensive business continuity on Azure, build your application ar
| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md) | :large_blue_diamond: |
| [Azure Firewall](../firewall/deploy-availability-zone-powershell.md) | :large_blue_diamond: |
| [Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md) | :large_blue_diamond: |
+| [Azure Functions](https://azure.github.io/AppService/2021/08/25/App-service-support-for-availability-zones.html) | :large_blue_diamond: |
| [Azure Kubernetes Service (AKS)](../aks/availability-zones.md) | :large_blue_diamond: |
| [Azure Media Services (AMS)](../media-services/latest/concept-availability-zones.md) | :large_blue_diamond: |
+| [Azure Monitor](/azure/azure-monitor/logs/availability-zones) | :large_blue_diamond: |
+| [Azure Monitor: Application Insights](/azure/azure-monitor/logs/availability-zones) | :large_blue_diamond: |
+| [Azure Monitor: Log Analytics](/azure/azure-monitor/logs/availability-zones) | :large_blue_diamond: |
| [Azure Private Link](../private-link/private-link-overview.md) | :large_blue_diamond: |
-| [Azure Site Recovery](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) | :large_blue_diamond: |
-| Azure SQL: [Virtual Machine](../azure-sql/database/high-availability-sla.md) | :large_blue_diamond: |
-| [Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md) | :large_blue_diamond: |
-| [Container Registry](../container-registry/zone-redundancy.md) | :large_blue_diamond: |
-| [Event Grid](../event-grid/overview.md) | :large_blue_diamond: |
-| [Network Watcher](/azure/network-watcher/frequently-asked-questions#service-availability-and-redundancy) | :large_blue_diamond: |
-| Network Watcher: [Traffic Analytics](/azure/network-watcher/frequently-asked-questions#service-availability-and-redundancy) | :large_blue_diamond: |
-| [Power BI Embedded](/power-bi/admin/service-admin-failover#what-does-high-availability) | :large_blue_diamond: |
+| [Azure Site Recovery](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) | :large_blue_diamond: |
+| Azure SQL: [Virtual Machine](../azure-sql/database/high-availability-sla.md) | :large_blue_diamond: |
+| [Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md) | :large_blue_diamond: |
+| [Container Registry](../container-registry/zone-redundancy.md) | :large_blue_diamond: |
+| [Event Grid](../event-grid/overview.md) | :large_blue_diamond: |
+| [HDInsight](/azure/hdinsight/hdinsight-use-availability-zones) | :large_blue_diamond: |
+| [Network Watcher](/azure/network-watcher/frequently-asked-questions#service-availability-and-redundancy) | :large_blue_diamond: |
+| Network Watcher: [Traffic Analytics](/azure/network-watcher/frequently-asked-questions#service-availability-and-redundancy) | :large_blue_diamond: |
+| [Power BI Embedded](/power-bi/admin/service-admin-failover#what-does-high-availability) | :large_blue_diamond: |
| [Premium Blob Storage](../storage/blobs/storage-blob-performance-tiers.md) | :large_blue_diamond: |
| Storage: [Azure Premium Files](../storage/files/storage-files-planning.md) | :large_blue_diamond: |
| Virtual Machines: [Azure Dedicated Host](../virtual-machines/windows/create-powershell-availability-zone.md) | :large_blue_diamond: |
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
Learn more about how to develop and configure Azure Functions.
<!-- LINKS -->
-[Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json
-[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-simple-linux/azuredeploy.json
+[Function app on Consumption plan]: https://azure.microsoft.com/resources/templates/function-app-create-dynamic/
+[Function app on Azure App Service plan]: https://azure.microsoft.com/resources/templates/function-app-create-dedicated/
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-scale.md
The following is a summary of the benefits of the three main hosting plans for F
| Plan | Benefits |
| --- | --- |
|**[Consumption plan](consumption-plan.md)**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.|
-|**[Premium plan](functions-premium-plan.md)**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.|
-|**[Dedicated plan](dedicated-plan.md)** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ You want to provide a custom image on which to run your functions. <br/>✔ Predictive scaling and costs are required.|
+|**[Premium plan](functions-premium-plan.md)**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.<br/>✔ You want to provide a custom Linux image on which to run your functions. |
+|**[Dedicated plan](dedicated-plan.md)** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ Predictive scaling and costs are required.|
The comparison tables in this article also include the following hosting options, which provide the highest amount of control and isolation in which to run your function apps.
azure-monitor Resource Manager Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/resource-manager-cluster.md
+
+ Title: Resource Manager template samples for Log Analytics clusters
+description: Sample Azure Resource Manager templates to deploy Log Analytics clusters.
+++ Last updated : 09/12/2021+++
+# Resource Manager template samples for Log Analytics clusters in Azure Monitor
+This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure Log Analytics clusters in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
+++
+## Template references
+
+- [Microsoft.OperationalInsights clusters](/azure/templates/microsoft.operationalinsights/2020-03-01-preview/clusters)
+
+## Create a Log Analytics cluster
+The following sample creates a new empty Log Analytics cluster.
+
+### Template file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the Log Analytics cluster."
+ }
+ },
+ "CommitmentTier": {
+ "type": "int",
+ "allowedValues": [
+ 500,
+ 1000,
+ 2000,
+ 5000
+ ],
+ "defaultValue": 500,
+ "metadata": {
+ "description": "The Capacity Reservation value."
+ }
+ },
+ "billingType": {
+ "type": "string",
+ "allowedValues": [
+ "Cluster",
+ "Workspaces"
+ ],
+ "defaultValue": "Cluster",
+ "metadata": {
+ "description": "The billing type settings. Can be 'Cluster' (default) or 'Workspaces' for proportional billing on workspaces."
+ }
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('clusterName')]",
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2020-08-01",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "sku": {
+ "name": "CapacityReservation",
+ "capacity": "[parameters('CommitmentTier')]"
+ },
+ "properties": {
+ "billingType": "[parameters('billingType')]"
+ }
+ }
+ ]
+}
+```
+
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "CommitmentTier": {
+ "value": 500
+ },
+ "billingType": {
+ "value": "Cluster"
+ }
+ }
+}
+```
+
+## Update a Log Analytics cluster
+The following sample updates a Log Analytics cluster to use customer-managed key.
+
+### Template file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of the Log Analytics cluster."
+ }
+ },
+ "keyVaultUri": {
+ "type": "string",
+ "metadata": {
+ "description": "The key identifier URI."
+ }
+ },
+ "keyName": {
+ "type": "string",
+ "metadata": {
+ "description": "The key name."
+ }
+ },
+ "keyVersion": {
+ "type": "string",
+ "metadata": {
+ "description": "The key version. When empty, latest key version is used."
+ }
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('clusterName')]",
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2020-08-01",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "keyVaultProperties": {
+ "keyVaultUri": "https://key-vault-name.vault.azure.net",
+ "keyName": "key-name",
+ "keyVersion": "current-version"
+ }
+ }
+ }
+ ]
+}
+```
+
+### Parameter file
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "keyVaultUri": {
+ "value": "https://key-vault-name.vault.azure.net"
+ },
+ "keyName": {
+ "value": "MyKeyName"
+ },
+ "keyVersion": {
+ "value": ""
+ }
+ }
+}
+```
+
+## Next steps
+
+* [Get other sample templates for Azure Monitor](../resource-manager-samples.md).
+* [Learn more about Log Analytics dedicated clusters](./logs-dedicated-clusters.md).
+* [Learn more about agent data sources](../agents/agent-data-sources.md).
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep description: Describes the data types that are available in Bicep Previously updated : 08/30/2021 Last updated : 09/10/2021 # Data types in Bicep
var mixedArray = [
] ```
-Arrays in Bicep are 0-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index of the indexer may itself be another expression. The expression `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expression of array types.
+Arrays in Bicep are zero-based. In the following example, the expression `exampleArray[0]` evaluates to 1 and `exampleArray[2]` evaluates to 3. The index may itself be another expression; for example, `exampleArray[index]` evaluates to 2. Integer indexers are only allowed on expressions of array types.
```bicep var index = 1
When specifying integer values, don't use quotation marks.
param exampleInt int = 1 ```
-For integers passed as inline parameters, the range of values may be limited by the SDK or command-line tool you use for deployment. For example, when using PowerShell to deploy a Bicep, integer types can range from -2147483648 to 2147483647. To avoid this limitation, specify large integer values in a [parameter file](parameter-files.md). Resource types apply their own limits for integer properties.
+In Bicep, integers are 64-bit integers. When passed as inline parameters, the range of values can be limited by the SDK or command-line tool you use for deployment. For example, when using PowerShell to deploy a Bicep file, integer types can range from -2147483648 to 2147483647. To avoid this limitation, specify large integer values in a [parameter file](parameter-files.md). Resource types apply their own limits for integer properties.
Floating point, decimal or binary formats aren't currently supported.
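As a quick, hedged illustration of the 64-bit range described above (the parameter name is made up for this sketch), a large literal can be declared directly in the Bicep file, while a value supplied inline from a 32-bit-limited tool would need a parameter file instead:

```bicep
// Hypothetical parameter: the default exceeds the 32-bit range, which is valid
// inside the Bicep file because Bicep integers are 64-bit.
param maxStorageBytes int = 5368709120

output configuredBytes int = maxStorageBytes
```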
output accessorResult string = environmentSettings['dev'].name
## Strings
-In Bicep, strings are marked with singled quotes, and must be declared on a single line. All Unicode characters with codepoints between *0* and *10FFFF* are allowed.
+In Bicep, strings are marked with single quotes and must be declared on a single line. All Unicode characters with code points between *0* and *10FFFF* are allowed.
```bicep param exampleString string = 'test value'
The following table lists the set of reserved characters that must be escaped by
| Escape Sequence | Represented value | Notes | |:-|:-|:-|
-| \\ | \ ||
-| \' | ' ||
-| \n | line feed (LF) ||
-| \r | carriage return (CR) ||
-| \t | tab character ||
-| \u{x} | Unicode code point *x* | *x* represents a hexadecimal codepoint value between *0* and *10FFFF* (both inclusive). Leading zeros are allowed. Codepoints above *FFFF* are emitted as a surrogate pair.
-| \$ | $ | Only needs to be escaped if it's followed by *{*. |
+| `\\` | `\` ||
+| `\'` | `'` ||
+| `\n` | line feed (LF) ||
+| `\r` | carriage return (CR) ||
+| `\t` | tab character ||
+| `\u{x}` | Unicode code point `x` | `x` represents a hexadecimal code point value between *0* and *10FFFF* (both inclusive). Leading zeros are allowed. Code points above *FFFF* are emitted as a surrogate pair. |
+| `\$` | `$` | Only escape when followed by `{`. |
```bicep
// evaluates to "what's up?"
var myVar = 'what\'s up?'
```
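The escaped dollar sign and the Unicode escape from the table can be sketched the same way (the variable names are illustrative only):

```bicep
// \$ prevents ${name} from being treated as interpolation.
var notInterpolated = 'literal \${name}'

// \u{20AC} is the Unicode code point for the euro sign.
var euroSymbol = '\u{20AC}'

output combined string = '${notInterpolated} ${euroSymbol}'
```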
-All strings in Bicep support interpolation. To inject an expression, surround it by *${* and *}`. Expressions that are referenced can't span multiple lines.
+All strings in Bicep support interpolation. To inject an expression, surround it by `${` and `}`. Expressions that are referenced can't span multiple lines.
```bicep var storageName = 'storage${uniqueString(resourceGroup().id)}
var storageName = 'storage${uniqueString(resourceGroup().id)}
## Multi-line strings
-In Bicep, multi-line strings are defined between 3 single quote characters (`'''`) followed optionally by a newline (the opening sequence), and 3 single quote characters (`'''` - the closing sequence). Characters that are entered between the opening and closing sequence are read verbatim, and no escaping is necessary or possible.
+In Bicep, multi-line strings are defined between three single quote characters (`'''`) followed optionally by a newline (the opening sequence), and three single quote characters (`'''` - the closing sequence). Characters that are entered between the opening and closing sequence are read verbatim, and no escaping is necessary or possible.
> [!NOTE]
> Because the Bicep parser reads all characters as is, depending on the line endings of your Bicep file, newlines can be interpreted as either `\r\n` or `\n`.
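A minimal sketch of a multi-line string (the variable name and content are placeholders):

```bicep
// Everything between the opening and closing ''' sequences is read verbatim,
// so the embedded single quotes don't need escaping.
var releaseNotes = '''
First line
Second line with 'quotes'
'''

output notes string = releaseNotes
```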
azure-resource-manager Operators Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators-access.md
description: Describes Bicep resource access operator and property access operat
Previously updated : 08/30/2021 Last updated : 09/10/2021 # Bicep accessor operators
The accessor operators are used to access child resources, properties on objects
## Index accessor
-`array[index]`
+`array[integerIndex]`
-`object['index']`
+`object['stringIndex']`
-To get an element in an array, use `[index]` and provide an integer for the index.
+Use the index accessor to get either an element from an array or a property from an object.
+
+For an **array**, provide the index as an **integer**. The integer matches the zero-based position of the element to retrieve.
+
+For an **object**, provide the index as a **string**. The string matches the name of the property to retrieve.
The following example gets an element in an array.
Output from the example:
| - | - | - |
| accessorResult | string | 'Contoso' |
-You can also use the index accessor to get an object property by name. You must use a string for the index, not an integer. The following example gets a property on an object.
+The next example gets a property on an object.
```bicep var environmentSettings = {
azure-resource-manager Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators.md
description: Describes the Bicep operators available for Azure Resource Manager
Previously updated : 08/30/2021 Last updated : 09/10/2021 # Bicep operators
The operators below are listed in descending order of precedence (the higher the
| `?` `:` | Conditional expression (ternary) | Right to left |
| `??` | Coalesce | Left to right |
-Enclosing an expression between `(` and `)` allows you to override the default Bicep operator precedence. For example, the expression x + y / z evaluates the division first and then the addition. However, the expression (x + y) / z evaluates the addition first and division second.
+## Parentheses
+
+Enclosing an expression between parentheses allows you to override the default Bicep operator precedence. For example, the expression `x + y / z` evaluates the division first and then the addition. However, the expression `(x + y) / z` evaluates the addition first and division second.
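For example, a small sketch of the precedence behavior described above (the parameter names are arbitrary):

```bicep
param x int = 4
param y int = 2
param z int = 2

// Division binds tighter than addition: 4 + (2 / 2) = 5.
var defaultOrder = x + y / z

// Parentheses force the addition first: (4 + 2) / 2 = 3.
var parenthesizedOrder = (x + y) / z

output results object = {
  defaultOrder: defaultOrder
  parenthesizedOrder: parenthesizedOrder
}
```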
## Accessor
The logical operators evaluate boolean values, return non-null values, or evalua
| - | - | - |
| `&&` | [And](./operators-logical.md#and-) | Returns `true` if all values are true. |
| `||`| [Or](./operators-logical.md#or-) | Returns `true` if either value is true. |
-| `!` | [Not](./operators-logical.md#not-) | Negates a boolean value. |
+| `!` | [Not](./operators-logical.md#not-) | Negates a boolean value. Takes one operand. |
| `??` | [Coalesce](./operators-logical.md#coalesce-) | Returns the first non-null value. |
| `?` `:` | [Conditional expression](./operators-logical.md#conditional-expression--) | Evaluates a condition for true or false and returns a value. |
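As a brief, illustrative sketch of the not and conditional operators (the parameter and variable names are invented for this example):

```bicep
param zoneRedundant bool = true

// Not (!) takes a single operand and negates a boolean value.
var isRegionalOnly = !zoneRedundant

// The conditional (ternary) operator evaluates a condition and returns one of two values.
var storageSku = zoneRedundant ? 'Premium_ZRS' : 'Premium_LRS'

output regionalOnly bool = isRegionalOnly
output sku string = storageSku
```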
The numeric operators use integers to do calculations and return integer values.
| `/` | [Divide](./operators-numeric.md#divide-) | Divides an integer by an integer. |
| `%` | [Modulo](./operators-numeric.md#modulo-) | Divides an integer by an integer and returns the remainder. |
| `+` | [Add](./operators-numeric.md#add-) | Adds two integers. |
-| `-` | [Subtract](./operators-numeric.md#subtract--) | Subtracts an integer from an integer. |
-| `-` | [Minus](./operators-numeric.md#minus--) | Multiplies an integer by `-1`. |
+| `-` | [Subtract](./operators-numeric.md#subtract--) | Subtracts one integer from another integer. Takes two operands. |
+| `-` | [Minus](./operators-numeric.md#minus--) (unary) | Multiplies an integer by `-1`. Takes one operand. |
> [!NOTE]
> Subtract and minus use the same operator. The functionality is different because subtract uses two
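A short sketch of the two uses of `-` (the values are arbitrary):

```bicep
param total int = 10
param used int = 3

// Binary subtract takes two operands: 10 - 3 = 7.
var remaining = total - used

// Unary minus takes a single operand and negates it: -7.
var negated = -remaining

output remaining int = remaining
output negated int = negated
```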
azure-signalr Server Graceful Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/server-graceful-shutdown.md
description: This article provides information about gracefully shutdown SignalR
Previously updated : 11/12/2020 Last updated : 08/16/2021
In general, there will be four stages in a graceful shutdown process:
Azure SignalR Service will try to reroute the client connection on this server to another valid server.
- In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively with an `IConnectionMigrationFeature` set in the `HttpContext`, which can be used to identify if the client connection was being migrated-in our migrated-out. It could be useful especially for stateful scenarios.
+ In this scenario, `OnConnectedAsync` and `OnDisconnectedAsync` will be triggered on the new server and the old server respectively, with an `IConnectionMigrationFeature` set in the `Context`, which can be used to identify whether the client connection was being migrated in or migrated out. This can be especially useful for stateful scenarios.
The client connection will be immediately migrated after the current message has been delivered, which means the next message will be routed to the new server.
public class Chat : Hub {
{ Console.WriteLine($"{Context.ConnectionId} connected.");
- var feature = Context.GetHttpContext().Features.Get<IConnectionMigrationFeature>();
+ var feature = Context.Features.Get<IConnectionMigrationFeature>();
if (feature != null) { Console.WriteLine($"[{feature.MigrateTo}] {Context.ConnectionId} is migrated from {feature.MigrateFrom}.");
public class Chat : Hub {
{ Console.WriteLine($"{Context.ConnectionId} disconnected.");
- var feature = Context.GetHttpContext().Features.Get<IConnectionMigrationFeature>();
+ var feature = Context.Features.Get<IConnectionMigrationFeature>();
if (feature != null) { Console.WriteLine($"[{feature.MigrateFrom}] {Context.ConnectionId} will be migrated to {feature.MigrateTo}.");
public class Chat : Hub {
await base.OnDisconnectedAsync(e); } }
-```
+```
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-overview.md
Previously updated : 06/24/2021 Last updated : 09/12/2021 tags: azure-synapse # Dynamic data masking
You set up a dynamic data masking policy in the Azure portal by selecting the **
The DDM recommendations engine flags certain fields from your database as potentially sensitive fields, which may be good candidates for masking. In the Dynamic Data Masking blade in the portal, you will see the recommended columns for your database. All you need to do is click **Add Mask** for one or more columns and then **Save** to apply a mask for these fields.
+## Manage dynamic data masking using T-SQL
+
+- To create a dynamic data mask, see [Creating a Dynamic Data Mask](/sql/relational-databases/security/dynamic-data-masking#creating-a-dynamic-data-mask).
+- To add or edit a mask on an existing column, see [Adding or Editing a Mask on an Existing Column](/sql/relational-databases/security/dynamic-data-masking#adding-or-editing-a-mask-on-an-existing-column).
+- To grant permissions to view unmasked data, see [Granting Permissions to View Unmasked Data](/sql/relational-databases/security/dynamic-data-masking#granting-permissions-to-view-unmasked-data).
+- To drop a dynamic data mask, see [Dropping a Dynamic Data Mask](/sql/relational-databases/security/dynamic-data-masking#dropping-a-dynamic-data-mask).
+
## Set up dynamic data masking for your database using PowerShell cmdlets

### Data masking policies
You can use the REST API to programmatically manage data masking policy and rule
## Permissions
-Dynamic data masking can be configured by the Azure SQL Database admin, server admin, or the role-based access control (RBAC) [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role.
+These built-in roles can configure dynamic data masking:
+- [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager)
+- [SQL DB Contributor](../../role-based-access-control/built-in-roles.md#sql-db-contributor)
+- [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor)
+
+These are the required actions to use dynamic data masking:
+
+Read/Write:
+- Microsoft.Sql/servers/databases/dataMaskingPolicies/*
+Read:
+- Microsoft.Sql/servers/databases/dataMaskingPolicies/read
+Write:
+- Microsoft.Sql/servers/databases/dataMaskingPolicies/write
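As a hedged illustration only (the role name and subscription ID are placeholders, not part of the article), the read and write actions above could be granted through a custom Azure role definition such as:

```json
{
  "Name": "Dynamic Data Masking Operator (example)",
  "IsCustom": true,
  "Description": "Can read and write dynamic data masking policies on databases.",
  "Actions": [
    "Microsoft.Sql/servers/databases/dataMaskingPolicies/read",
    "Microsoft.Sql/servers/databases/dataMaskingPolicies/write"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```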
+
+To learn more about permissions when using dynamic data masking with T-SQL commands, see [Permissions](/sql/relational-databases/security/dynamic-data-masking#permissions).
## See also
azure-video-analyzer Analyze Live Video Use Your Model Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-use-your-model-http.md
Title: Analyze live video with your own model - HTTP
description: This quickstart describes how to analyze live video with your own model (HTTP) with Video Analyzer. Previously updated : 06/01/2021 Last updated : 09/10/2021 zone_pivot_groups: video-analyzer-programming-languages
batch Batch Get Resource Counts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-get-resource-counts.md
Title: Count states for tasks and nodes description: Count the state of Azure Batch tasks and compute nodes to help manage and monitor Batch solutions. Previously updated : 06/18/2020 Last updated : 09/10/2021
To monitor and manage large-scale Azure Batch solutions, you may need to determine counts of resources in various states. Azure Batch provides efficient operations to get counts for Batch tasks and compute nodes. You can use these operations instead of potentially time-consuming list queries that return detailed information about large collections of tasks or nodes. -- [Get Task Counts](/rest/api/batchservice/job/gettaskcounts) gets an aggregate count of active, running, and completed tasks in a job, and of tasks that succeeded or failed.
+- [Get Task Counts](/rest/api/batchservice/job/gettaskcounts) gets an aggregate count of active, running, and completed tasks in a job, and of tasks that succeeded or failed. By counting tasks in each state, you can more easily display job progress to a user, or detect unexpected delays or failures that may affect the job.
- By counting tasks in each state, you can more easily display job progress to a user, or detect unexpected delays or failures that may affect the job. Get Task Counts is available as of Batch Service API version 2017-06-01.5.1 and related SDKs and tools.
--- [List Pool Node Counts](/rest/api/batchservice/account/listpoolnodecounts) gets the number of dedicated and low-priority compute nodes in each pool that are in various states: creating, idle, offline, preempted, rebooting, reimaging, starting, and others.-
- By counting nodes in each state, you can determine when you have adequate compute resources to run your jobs, and identify potential issues with your pools. List Pool Node Counts is available as of Batch Service API version 2018-03-01.6.1 and related SDKs and tools.
+- [List Pool Node Counts](/rest/api/batchservice/account/listpoolnodecounts) gets the number of dedicated and low-priority compute nodes in each pool that are in various states: creating, idle, offline, preempted, rebooting, reimaging, starting, and others. By counting nodes in each state, you can determine when you have adequate compute resources to run your jobs, and identify potential issues with your pools.
Note that at times, the numbers returned by these operations may not be up to date. If you need to be sure that a count is accurate, use a list query to count these resources. List queries also let you get information about other Batch resources such as applications. For more information about applying filters to list queries, see [Create queries to list Batch resources efficiently](batch-efficient-list-queries.md).
Note that at times, the numbers returned by these operations may not be up to da
The Get Task Counts operation counts tasks by the following states: -- **Active** - A task that is queued and able to run, but is not currently assigned to a compute node. A task is also `active` if it is [dependent on a parent task](batch-task-dependencies.md) that has not yet completed. -- **Running** - A task that has been assigned to a compute node, but has not yet completed. A task is counted as `running` when its state is either `preparing` or `running`, as indicated by the [Get information about a task](/rest/api/batchservice/task/get) operation.-- **Completed** - A task that is no longer eligible to run, because it either finished successfully, or finished unsuccessfully and also exhausted its retry limit. -- **Succeeded** - A task whose result of task execution is `success`. Batch determines whether a task has succeeded or failed by checking the `TaskExecutionResult` property of the [executionInfo](/rest/api/batchservice/task/get) property.-- **Failed** A task whose result of task execution is `failure`.
+- **Active**: A task that is queued and able to run, but is not currently assigned to a compute node. A task is also `active` if it is [dependent on a parent task](batch-task-dependencies.md) that has not yet completed.
+- **Running**: A task that has been assigned to a compute node, but has not yet completed. A task is counted as `running` when its state is either `preparing` or `running`, as indicated by the [Get information about a task](/rest/api/batchservice/task/get) operation.
+- **Completed**: A task that is no longer eligible to run, because it either finished successfully, or finished unsuccessfully and also exhausted its retry limit.
+- **Succeeded**: A task where the result of task execution is `success`. Batch determines whether a task has succeeded or failed by checking the `TaskExecutionResult` property of the [executionInfo](/rest/api/batchservice/task/get) property.
+- **Failed**: A task where the result of task execution is `failure`.
-The following .NET code sample shows how to retrieve task counts by state:
+The following .NET code sample shows how to retrieve task counts by state.
```csharp var taskCounts = await batchClient.JobOperations.GetJobTaskCountsAsync("job-1");
Console.WriteLine("Succeeded task count: {0}", taskCounts.Succeeded);
Console.WriteLine("Failed task count: {0}", taskCounts.Failed); ```
-You can use a similar pattern for REST and other supported languages to get task counts for a job.
-
-> [!NOTE]
-> Batch Service API versions before 2018-08-01.7.0 also return a `validationStatus` property in the Get Task Counts response. This property indicates whether Batch checked the state counts for consistency with the states reported in the List Tasks API. A value of `validated` indicates only that Batch checked for consistency at least once for the job. The value of the `validationStatus` property does not indicate whether the counts that Get Task Counts returns are currently up to date.
+You can use a similar pattern for REST and other supported languages to get task counts for a job.
## Node state counts The List Pool Node Counts operation counts compute nodes by the following states in each pool. Separate aggregate counts are provided for dedicated nodes and low-priority nodes in each pool. -- **Creating** - An Azure-allocated VM that has not yet started to join a pool.-- **Idle** - An available compute node that is not currently running a task.-- **LeavingPool** - A node that is leaving the pool, either because the user explicitly removed it or because the pool is resizing or autoscaling down.-- **Offline** - A node that Batch cannot use to schedule new tasks.-- **Preempted** - A low-priority node that was removed from the pool because Azure reclaimed the VM. A `preempted` node can be reinitialized when replacement low-priority VM capacity is available.-- **Rebooting** - A node that is restarting.-- **Reimaging** - A node on which the operating system is being reinstalled.-- **Running** - A node that is running one or more tasks (other than the start task).-- **Starting** - A node on which the Batch service is starting. -- **StartTaskFailed** - A node on which the [start task](/rest/api/batchservice/pool/add#starttask) failed and exhausted all retries, and on which `waitForSuccess` is set on the start task. The node is not usable for running tasks.-- **Unknown** - A node that lost contact with the Batch service and whose state isn't known.-- **Unusable** - A node that can't be used for task execution because of errors.-- **WaitingForStartTask** - A node on which the start task started running, but `waitForSuccess` is set and the start task has not completed.
+- **Creating**: An Azure-allocated VM that has not yet started to join a pool.
+- **Idle**: An available compute node that is not currently running a task.
+- **LeavingPool**: A node that is leaving the pool, either because the user explicitly removed it or because the pool is resizing or autoscaling down.
+- **Offline**: A node that Batch cannot use to schedule new tasks.
+- **Preempted**: A low-priority node that was removed from the pool because Azure reclaimed the VM. A `preempted` node can be reinitialized when replacement low-priority VM capacity is available.
+- **Rebooting**: A node that is restarting.
+- **Reimaging**: A node on which the operating system is being reinstalled.
+- **Running**: A node that is running one or more tasks (other than the start task).
+- **Starting**: A node on which the Batch service is starting.
+- **StartTaskFailed**: A node on which the [start task](/rest/api/batchservice/pool/add#starttask) failed and exhausted all retries, and on which `waitForSuccess` is set on the start task. The node is not usable for running tasks.
+- **Unknown**: A node that lost contact with the Batch service and whose state isn't known.
+- **Unusable**: A node that can't be used for task execution because of errors.
+- **WaitingForStartTask**: A node on which the start task started running, but `waitForSuccess` is set and the start task has not completed.
The following C# snippet shows how to list node counts for all pools in the current account:
batch Batch Job Prep Release https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-job-prep-release.md
Title: Create tasks to prepare & complete jobs on compute nodes description: Use job-level preparation tasks to minimize data transfer to Azure Batch compute nodes, and release tasks for node cleanup at job completion. Previously updated : 02/17/2020 Last updated : 09/10/2021 # Run job preparation and job release tasks on Batch compute nodes
- An Azure Batch job often requires some form of setup before its tasks are executed, and post-job maintenance when its tasks are completed. You might need to download common task input data to your compute nodes, or upload task output data to Azure Storage after the job completes. You can use **job preparation** and **job release** tasks to perform these operations.
+ An Azure Batch job often requires some form of setup before its tasks are executed. It also may require post-job maintenance when its tasks are completed. For example, you might need to download common task input data to your compute nodes, or upload task output data to Azure Storage after the job completes. You can use **job preparation** and **job release** tasks to perform these operations.
## What are job preparation and release tasks?
+
Before a job's tasks run, the job preparation task runs on all compute nodes scheduled to run at least one task. Once the job is completed, the job release task runs on each node in the pool that executed at least one task. As with normal Batch tasks, you can specify a command line to be invoked when a job preparation or release task is run.
-Job preparation and release tasks offer familiar Batch task features such as file download ([resource files][net_job_prep_resourcefiles]), elevated execution, custom environment variables, maximum execution duration, retry count, and file retention time.
+Job preparation and release tasks offer familiar Batch task features such as file download ([resource files](/dotnet/api/microsoft.azure.batch.jobpreparationtask.resourcefiles)), elevated execution, custom environment variables, maximum execution duration, retry count, and file retention time.
-In the following sections, you'll learn how to use the [JobPreparationTask][net_job_prep] and [JobReleaseTask][net_job_release] classes found in the [Batch .NET][api_net] library.
+In the following sections, you'll learn how to use the [JobPreparationTask](/dotnet/api/microsoft.azure.batch.jobpreparationtask) and [JobReleaseTask](/dotnet/api/microsoft.azure.batch.jobreleasetask) classes found in the [Batch .NET](/dotnet/api/microsoft.azure.batch) library.
> [!TIP] > Job preparation and release tasks are especially helpful in "shared pool" environments, in which a pool of compute nodes persists between job runs and is used by many jobs.
->
->
## When to use job preparation and release tasks
-Job preparation and job release tasks are a good fit for the following situations:
-
-**Download common task data**
-
-Batch jobs often require a common set of data as input for the job's tasks. For example, in daily risk analysis calculations, market data is job-specific, yet common to all tasks in the job. This market data, often several gigabytes in size, should be downloaded to each compute node only once so that any task that runs on the node can use it. Use a **job preparation task** to download this data to each node before the execution of the job's other tasks.
-**Delete job and task output**
-
-In a "shared pool" environment, where a pool's compute nodes are not decommissioned between jobs, you may need to delete job data between runs. You might need to conserve disk space on the nodes, or satisfy your organization's security policies. Use a **job release task** to delete data that was downloaded by a job preparation task, or generated during task execution.
+Job preparation and job release tasks are a good fit for the following situations:
-**Log retention**
+- **Downloading common task data**: Batch jobs often require a common set of data as input for the job's tasks. For example, in daily risk analysis calculations, market data is job-specific, yet common to all tasks in the job. This market data, often several gigabytes in size, should be downloaded to each compute node only once so that any task that runs on the node can use it. Use a **job preparation task** to download this data to each node before the execution of the job's other tasks.
-You might want to keep a copy of log files that your tasks generate, or perhaps crash dump files that can be generated by failed applications. Use a **job release task** in such cases to compress and upload this data to an [Azure Storage][azure_storage] account.
+- **Job and task output deletion**: In a "shared pool" environment, where a pool's compute nodes are not decommissioned between jobs, you may need to delete job data between runs. You might need to conserve disk space on the nodes, or satisfy your organization's security policies. Use a **job release task** to delete data that was downloaded by a job preparation task, or that was generated during task execution.
-> [!TIP]
-> Another way to persist logs and other job and task output data is to use the [Azure Batch File Conventions](batch-task-output.md) library.
->
->
+- **Log retention**: You might want to keep a copy of log files that your tasks generate, or perhaps crash dump files that can be generated by failed applications. Use a **job release task** in such cases to compress and upload this data to an [Azure Storage account](accounts.md#azure-storage-accounts).
## Job preparation task
+Before executing tasks in a job, Batch runs the job preparation task on each compute node scheduled to run a task. By default, Batch waits for the job preparation task to complete before running the tasks scheduled to execute on the node. However, you can configure the service not to wait. If the node restarts, the job preparation task runs again. You can also disable this behavior. If you have a job with a job preparation task and a job manager task configured, the job preparation task runs before the job manager task, just as it does for all other tasks. The job preparation task always runs first.
-Before execution of a job's tasks, Batch executes the job preparation task on each compute node scheduled to run a task. By default, Batch waits for the job preparation task to complete before running the tasks scheduled to execute on the node. However, you can configure the service not to wait. If the node restarts, the job preparation task runs again. You can also disable this behavior. If you have a job with a job preparation task and a job manager task configured, the job preparation task runs before the job manager task, just as it does for all other tasks. The job preparation task always runs first.
-
-The job preparation task is executed only on nodes that are scheduled to run a task. This prevents the unnecessary execution of a preparation task in case a node is not assigned a task. This can occur when the number of tasks for a job is less than the number of nodes in a pool. It also applies when [concurrent task execution](batch-parallel-node-tasks.md) is enabled, which leaves some nodes idle if the task count is lower than the total possible concurrent tasks. By not running the job preparation task on idle nodes, you can spend less money on data transfer charges.
+The job preparation task is executed only on nodes that are scheduled to run a task. This prevents the unnecessary execution of a preparation task in case a node is not assigned any tasks. This can occur when the number of tasks for a job is less than the number of nodes in a pool. It also applies when [concurrent task execution](batch-parallel-node-tasks.md) is enabled, which leaves some nodes idle if the task count is lower than the total possible concurrent tasks.
> [!NOTE]
-> [JobPreparationTask][net_job_prep_cloudjob] differs from [CloudPool.StartTask][pool_starttask] in that JobPreparationTask executes at the start of each job, whereas StartTask executes only when a compute node first joins a pool or restarts.
->
-
+> [JobPreparationTask](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask) differs from [CloudPool.StartTask](/dotnet/api/microsoft.azure.batch.cloudpool.starttask) in that JobPreparationTask executes at the start of each job, whereas StartTask executes only when a compute node first joins a pool or restarts.
## Job release task
-Once a job is marked as completed, the job release task is executed on each node in the pool that executed at least one task. You mark a job as completed by issuing a terminate request. The Batch service then sets the job state to *terminating*, terminates any active or running tasks associated with the job, and runs the job release task. The job then moves to the *completed* state.
+Once a job is marked as completed, the job release task runs on each node in the pool that executed at least one task. You mark a job as completed by issuing a terminate request. This request sets the job state to *terminating*, terminates any active or running tasks associated with the job, and runs the job release task. The job then moves to the *completed* state.
> [!NOTE]
-> Job deletion also executes the job release task. However, if a job has already been terminated, the release task is not run a second time if the job is later deleted.
+> Deleting a job also executes the job release task. However, if a job has already been terminated, the release task is not run a second time if the job is later deleted.
Jobs release tasks can run for a maximum of 15 minutes before being terminated by the Batch service. For more information, see the [REST API reference documentation](/rest/api/batchservice/job/add#jobreleasetask).
->
->
## Job prep and release tasks with Batch .NET
-To use a job preparation task, assign a [JobPreparationTask][net_job_prep] object to your job's [CloudJob.JobPreparationTask][net_job_prep_cloudjob] property. Similarly, initialize a [JobReleaseTask][net_job_release] and assign it to your job's [CloudJob.JobReleaseTask][net_job_prep_cloudjob] property to set the job's release task.
-In this code snippet, `myBatchClient` is an instance of [BatchClient][net_batch_client], and `myPool` is an existing pool within the Batch account.
+To use a job preparation task, assign a [JobPreparationTask](/dotnet/api/microsoft.azure.batch.jobpreparationtask) object to your job's [CloudJob.JobPreparationTask](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask) property. Similarly, to use a job release task, initialize a [JobReleaseTask](/dotnet/api/microsoft.azure.batch.jobreleasetask) and assign it to your job's [CloudJob.JobReleaseTask](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask).
+
+In this code snippet, `myBatchClient` is an instance of [BatchClient](/dotnet/api/microsoft.azure.batch.batchclient), and `myPool` is an existing pool within the Batch account.
```csharp // Create the CloudJob for CloudPool "myPool"
myJob.JobReleaseTask =
await myJob.CommitAsync(); ```
-As mentioned earlier, the release task is executed when a job is terminated or deleted. Terminate a job with [JobOperations.TerminateJobAsync][net_job_terminate]. Delete a job with [JobOperations.DeleteJobAsync][net_job_delete]. You typically terminate or delete a job when its tasks are completed, or when a timeout that you've defined has been reached.
+As mentioned earlier, the release task is executed when a job is terminated or deleted. Terminate a job with [JobOperations.TerminateJobAsync](/dotnet/api/microsoft.azure.batch.joboperations.terminatejobasync). Delete a job with [JobOperations.DeleteJobAsync](/dotnet/api/microsoft.azure.batch.joboperations.deletejobasync). You typically terminate or delete a job when its tasks are completed, or when a timeout that you've defined has been reached.
```csharp
-// Terminate the job to mark it as Completed; this will initiate the
-// Job Release Task on any node that executed job tasks. Note that the
-// Job Release Task is also executed when a job is deleted, thus you
-// need not call Terminate if you typically delete jobs after task completion.
+// Terminate the job to mark it as completed; this will initiate the
+// job release task on any node that executed job tasks. Note that the
+// job release task is also executed when a job is deleted, so you don't
+// have to call Terminate if you delete jobs after task completion.
+ await myBatchClient.JobOperations.TerminateJobAsync("JobPrepReleaseSampleJob");
```

## Code sample on GitHub
-To see job preparation and release tasks in action, check out the [JobPrepRelease][job_prep_release_sample] sample project on GitHub. This console application does the following:
+
+To see job preparation and release tasks in action, check out the [JobPrepRelease](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp/ArticleProjects/JobPrepRelease) sample project on GitHub. This console application does the following:
1. Creates a pool with two nodes.
-2. Creates a job with job preparation, release, and standard tasks.
-3. Runs the job preparation task, which first writes the node ID to a text file in a node's "shared" directory.
-4. Runs a task on each node that writes its task ID to the same text file.
-5. Once all tasks are completed (or the timeout is reached), prints the contents of each node's text file to the console.
-6. When the job is completed, runs the job release task to delete the file from the node.
-7. Prints the exit codes of the job preparation and release tasks for each node on which they executed.
-8. Pauses execution to allow confirmation of job and/or pool deletion.
+1. Creates a job with job preparation, release, and standard tasks.
+1. Runs the job preparation task, which first writes the node ID to a text file in a node's "shared" directory.
+1. Runs a task on each node that writes its task ID to the same text file.
+1. Once all tasks are completed (or the timeout is reached), prints the contents of each node's text file to the console.
+1. When the job is completed, runs the job release task to delete the file from the node.
+1. Prints the exit codes of the job preparation and release tasks for each node on which they executed.
+1. Pauses execution to allow confirmation of job and/or pool deletion.
Output from the sample application is similar to the following:
Sample complete, hit ENTER to exit...
> [!NOTE]
> Due to the variable creation and start time of nodes in a new pool (some nodes are ready for tasks before others), you may see different output. Specifically, because the tasks complete quickly, one of the pool's nodes may execute all of the job's tasks. If this occurs, you will notice that the job prep and release tasks do not exist for the node that executed no tasks.
->
->
### Inspect job preparation and release tasks in the Azure portal
-When you run the sample application, you can use the [Azure portal][portal] to view the properties of the job and its tasks, or even download the shared text file that is modified by the job's tasks.
-The screenshot below shows the **Preparation tasks blade** in the Azure portal after a run of the sample application. Navigate to the *JobPrepReleaseSampleJob* properties after your tasks have completed (but before deleting your job and pool) and click **Preparation tasks** or **Release tasks** to view their properties.
+You can use the [Azure portal](https://portal.azure.com) to view the properties of the job and its tasks. After you run the sample application, you can also download the shared text file that is modified by the job's tasks.
+
+The screenshot below shows the **Preparation tasks blade** in the Azure portal. Navigate to the *JobPrepReleaseSampleJob* properties after your tasks have completed (but before deleting your job and pool) and click **Preparation tasks** or **Release tasks** to view their properties.
-![Job preparation properties in Azure portal][1]
## Next steps
-### Application packages
-In addition to the job preparation task, you can also use the [application packages](batch-application-packages.md) feature of Batch to prepare compute nodes for task execution. This feature is especially useful for deploying applications that do not require running an installer, applications that contain many (100+) files, or applications that require strict version control.
-
-### Installing applications and staging data
-This MSDN forum post provides an overview of several methods of preparing your nodes for running tasks:
-
-[Installing applications and staging data on Batch compute nodes][forum_post]
-
-Written by one of the Azure Batch team members, it discusses several techniques that you can use to deploy applications and data to compute nodes.
-
-[api_net]: /dotnet/api/microsoft.azure.batch
-[api_net_listjobs]: /dotnet/api/microsoft.azure.batch.joboperations
-[api_rest]: /rest/api/batchservice/
-[azure_storage]: https://azure.microsoft.com/services/storage/
-[portal]: https://portal.azure.com
-[job_prep_release_sample]: https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/JobPrepRelease
-[forum_post]: https://social.msdn.microsoft.com/Forums/en-US/87b19671-1bdf-427a-972c-2af7e5ba82d9/installing-applications-and-staging-data-on-batch-compute-nodes?forum=azurebatch
-[net_batch_client]: /dotnet/api/microsoft.azure.batch.batchclient
-[net_cloudjob]:https://msdn.microsoft.com/library/azure/microsoft.azure.batch.cloudjob.aspx
-[net_job_prep]: /dotnet/api/microsoft.azure.batch.jobpreparationtask
-[net_job_prep_cloudjob]: /dotnet/api/microsoft.azure.batch.cloudjob
-[net_job_prep_resourcefiles]: /dotnet/api/microsoft.azure.batch.jobpreparationtask
-[net_job_delete]: /previous-versions/azure/mt281411(v=azure.100)
-[net_job_terminate]: /previous-versions/azure/mt188985(v=azure.100)
-[net_job_release]: /dotnet/api/microsoft.azure.batch.jobreleasetask
-[net_job_release_cloudjob]: /dotnet/api/microsoft.azure.batch.cloudjob
-[pool_starttask]: /dotnet/api/microsoft.azure.batch.cloudpool
-
-[net_list_certs]: /dotnet/api/microsoft.azure.batch.certificateoperations
-[net_list_compute_nodes]: /dotnet/api/microsoft.azure.batch.pooloperations
-[net_list_job_schedules]: /dotnet/api/microsoft.azure.batch.jobscheduleoperations
-[net_list_jobprep_status]: /dotnet/api/microsoft.azure.batch.joboperations
-[net_list_jobs]: /dotnet/api/microsoft.azure.batch.joboperations
-[net_list_nodefiles]: /dotnet/api/microsoft.azure.batch.joboperations
-[net_list_pools]: /dotnet/api/microsoft.azure.batch.pooloperations
-[net_list_schedule_jobs]: /dotnet/api/microsoft.azure.batch.jobscheduleoperations
-[net_list_task_files]: /dotnet/api/microsoft.azure.batch.cloudtask
-[net_list_tasks]: /dotnet/api/microsoft.azure.batch.joboperations
-
-[1]: ./media/batch-job-prep-release/portal-jobprep-01.png
+
+- Learn about [error checking for jobs and tasks](batch-job-task-error-checking.md).
+- Learn how to use [application packages](batch-application-packages.md) to prepare Batch compute nodes for task execution.
+- Explore different ways to [copy data and application to Batch compute nodes](batch-applications-to-pool-nodes.md).
+- Learn about using the [Azure Batch File Conventions library](batch-task-output.md#use-the-batch-file-conventions-library-for-net) to persist logs and other job and task output data.
batch Batch Job Task Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-job-task-error-checking.md
Title: Check for job and task errors
description: Learn about errors to check for and how to troubleshoot jobs and tasks. Previously updated : 11/23/2020 Last updated : 09/08/2021
A job is a grouping of one or more tasks, with the tasks actually specifying the
When adding a job, you can specify the following parameters, which can influence how the job can fail:

- [Job Constraints](/rest/api/batchservice/job/add#jobconstraints)
- - The `maxWallClockTime` property can optionally be specified to set the maximum amount of time a job can be active or running. If exceeded, the job will be terminated with the `terminateReason` property set in the [executionInfo](/rest/api/batchservice/job/get#cloudjob) for the job.
+ - The `maxWallClockTime` property can optionally be specified to set the maximum amount of time a job can be active or running. If exceeded, the job will be terminated with the `terminateReason` property set in the [executionInfo](/rest/api/batchservice/job/get#jobexecutioninformation) for the job.
- [Job Preparation Task](/rest/api/batchservice/job/add#jobpreparationtask) - If specified, a job preparation task is run the first time a task is run for a job on a node. The job preparation task can fail, which will lead to the task not being run and the job not completing. - [Job Release Task](/rest/api/batchservice/job/add#jobreleasetask)
The following job properties should be checked for errors:
### Job preparation tasks
-If a job preparation task is specified for a job, then an instance of that task will be run the first time a task for the job is run on a node. The job preparation task configured on the job can be thought of as a task template, with multiple job preparation task instances being run, up to the number of nodes in a pool.
+If a [job preparation task](batch-job-prep-release.md#job-preparation-task) is specified for a job, then an instance of that task will be run the first time a task for the job is run on a node. The job preparation task configured on the job can be thought of as a task template, with multiple job preparation task instances being run, up to the number of nodes in a pool.
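As a minimal illustration of this behavior, the sketch below uses the `azure-batch` Python SDK to add a job that carries both a job preparation task and the `maxWallClockTime` constraint mentioned earlier. The account, pool, and job names are hypothetical, and the exact client constructor arguments can vary between SDK versions.

```python
from datetime import timedelta

from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

# Hypothetical account details -- replace with your own Batch account values.
credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westus2.batch.azure.com"
)

job = batchmodels.JobAddParameter(
    id="myjob",
    pool_info=batchmodels.PoolInformation(pool_id="mypool"),
    # If the job is active or running longer than this, Batch terminates it and
    # records the reason in the job's executionInfo.terminateReason.
    constraints=batchmodels.JobConstraints(max_wall_clock_time=timedelta(hours=2)),
    # One instance of this task runs on a node before the first task of the job
    # runs there; with wait_for_success=True, a failed instance blocks task
    # scheduling on that node.
    job_preparation_task=batchmodels.JobPreparationTask(
        command_line="/bin/bash -c 'echo preparing node'",
        wait_for_success=True,
    ),
)
client.job.add(job)
```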
The job preparation task instances should be checked to determine if there were errors:
The job preparation task instances should be checked to determine if there were
### Job release tasks
-If a job release task is specified for a job, then when a job is being terminated, an instance of the job release task is run on each pool node where a job preparation task was run. The job release task instances should be checked to determine if there were errors:
+If a [job release task](batch-job-prep-release.md#job-release-task) is specified for a job, then when a job is being terminated, an instance of the job release task is run on each pool node where a job preparation task was run. The job release task instances should be checked to determine if there were errors:
- All the instances of the job release task being run can be obtained from the job using the API [List Preparation and Release Task Status](/rest/api/batchservice/job/listpreparationandreleasetaskstatus). As with any task, there is [execution information](/rest/api/batchservice/job/listpreparationandreleasetaskstatus#jobpreparationandreleasetaskexecutioninformation) available with properties such as `failureInfo`, `exitCode`, and `result`. - If one or more job release tasks fail, then the job will still be terminated and move to a `completed` state.
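To make that check concrete, the following sketch lists the per-node job preparation and release task instances with the same `azure-batch` Python SDK and prints any failed job release tasks. As above, the account and job names are hypothetical.

```python
from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

# Hypothetical account details -- same assumptions as the earlier sketch.
credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    credentials, batch_url="https://mybatchaccount.westus2.batch.azure.com"
)

# One status record is returned per node that ran a job preparation task.
for status in client.job.list_preparation_and_release_task_status("myjob"):
    release_info = status.job_release_task_execution_info
    if release_info is None:
        continue  # No job release task has run on this node yet.
    if release_info.result == batchmodels.TaskExecutionResult.failure:
        print(f"Job release task failed on node {status.node_id}: "
              f"exit code {release_info.exit_code}")
```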
On every file upload, Batch writes two log files to the compute node, `fileuploa
## Next steps - Check that your application implements comprehensive error checking; it can be critical to promptly detect and diagnose issues.-- Learn more about [jobs and tasks](jobs-and-tasks.md).
+- Learn more about [jobs and tasks](jobs-and-tasks.md) and [job preparation and release tasks](batch-job-prep-release.md).
batch High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/high-availability-disaster-recovery.md
Title: High availability and disaster recovery description: Learn how to design your Batch application for a regional outage. Previously updated : 12/30/2020 Last updated : 09/08/2021 # Design your Batch application for high availability
Consider the following points when designing a solution that can failover:
- Use templates and/or scripts to automate the deployment of the application in a region. - Keep application binaries and reference data up-to-date in all regions. Staying up-to-date will ensure the region can be brought online quickly without having to wait for the upload and deployment of files. For example, if a custom application to install on pool nodes is stored and referenced using Batch application packages, then when a new version of the application is produced, it should be uploaded to each Batch account and referenced by the pool configuration (or make the new version the default version). - In the application calling Batch, storage, and any other services, make it easy to switch over clients or the load to different regions.
+- When applicable, consider [creating pools across Availability Zones](create-pool-availability-zones.md).
- Consider frequently switching over to an alternate region as part of normal operation. For example, with two deployments in separate regions, switch over to the alternate region every month. ## Next steps
batch Pool Endpoint Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/pool-endpoint-configuration.md
Title: Configure node endpoints in Azure Batch pool description: How to configure or disable access to SSH or RDP ports on compute nodes in an Azure Batch pool. Previously updated : 02/13/2018 Last updated : 09/10/2021 # Configure or disable remote access to compute nodes in an Azure Batch pool
data-share Data Share Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/data-share-troubleshoot.md
Previously updated : 04/22/2021 Last updated : 09/10/2021 # Troubleshoot common problems in Azure Data Share
In some cases, when new users select **Accept Invitation** in an email invitatio
* **The invitation is already accepted.** The link in the email takes you to the **Data Share Invitations** page in the Azure portal. This page lists only pending invitations. Accepted invitations don't appear on the page. To view received shares and configure your target Azure Data Explorer cluster setting, go to the Data Share resource you used to accept the invitation.
+* **You are a guest user of the tenant.** If you're a guest user of the tenant, you need to verify your email address for the tenant before you can view the invitation. Once verified, the verification is valid for 12 months.
+ ## Creating and receiving shares The following errors might appear when you create a new share, add datasets, or map datasets:
data-share How To Share From Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-sql.md
Previously updated : 02/24/2021 Last updated : 09/10/2021 # Share and receive data from Azure SQL Database and Azure Synapse Analytics
Sign in to the [Azure portal](https://portal.azure.com/).
To open invitation from Azure portal directly, search for **Data Share Invitations** in Azure portal. This takes you to the list of Data Share invitations.
+ If you are a guest user of a tenant, you will be asked to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, the verification is valid for 12 months.
+ ![List of Invitations](./media/invitations.png "List of invitations") 1. Select the share you would like to view.
data-share How To Share From Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-share-from-storage.md
Previously updated : 04/20/2021 Last updated : 09/10/2021 # Share and receive data from Azure Blob Storage and Azure Data Lake Storage
You can open an invitation from email or directly from the Azure portal.
To open an invitation from the Azure portal, search for *Data Share invitations*. You see a list of Data Share invitations.
+ If you are a guest user of a tenant, you will be asked to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, the verification is valid for 12 months.
+ ![Screenshot showing the list of invitations in the Azure portal.](./media/invitations.png "List of invitations.") 1. Select the share you want to view.
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/subscribe-to-data-share.md
Previously updated : 03/24/2021 Last updated : 09/10/2021 # Tutorial: Accept and receive data using Azure Data Share
Sign in to the [Azure portal](https://portal.azure.com/).
To open invitation from Azure portal directly, search for **Data Share Invitations** in Azure portal. This action takes you to the list of Data Share invitations.
+ If you are a guest user of a tenant, you will be asked to verify your email address for the tenant before viewing a Data Share invitation for the first time. Once verified, the verification is valid for 12 months.
+ ![List of Invitations](./media/invitations.png "List of invitations")
-1. Select the share you would like to view.
+1. Select the invitation you would like to view.
### [Azure CLI](#tab/azure-cli)
data-share Supported Data Stores https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/supported-data-stores.md
Previously updated : 04/20/2021 Last updated : 09/10/2021 # Supported data stores in Azure Data Share
When consumers accept data into Azure Data Lake Storage Gen2 or Azure Blob Stora
For more information, see [Share and receive data from Azure SQL Database and Azure Synapse Analytics](how-to-share-from-sql.md). ## Share from Data Explorer
-Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. A data provider can share at the level of the database or the cluster.
+Azure Data Share supports the ability to share databases in-place from Azure Data Explorer clusters. A data provider can share at the level of the database or the cluster. If you use the Data Share API to share data, you can also share specific tables.
When data is shared at the database level, data consumers can access only the databases that the data provider shared. When a provider shares data at the cluster level, data consumers can access all of the databases from the provider's cluster, including any future databases that the data provider creates.
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/concept-supported-protocols.md
# Support for IoT, OT, ICS, and SCADA protocols
-Azure Defender for IoT provides an open and interoperable Operation Technology (OT) cybersecurity platform. Defender for IoT is deployed in many different locations, and reduces Industrial IoT (IT), and ICS risk with deployments in demanding, and complex OT environments across all industry verticals, and geographies.
+Azure Defender for IoT provides an open and interoperable Operational Technology (OT) cybersecurity platform. Defender for IoT is deployed in many different locations and reduces IoT, IT, and ICS risk with deployments in demanding, complex OT environments across all industry verticals and geographies.
## Supported protocols Azure Defender for IoT supports a broad range of protocols across a diverse enterprise, and includes industrial automation equipment across all industrial sectors, enterprise networks, and building management system (BMS) environments. For custom or proprietary protocols, Microsoft offers an SDK that makes it easy to develop, test, and deploy custom protocol dissectors as plug-ins. The SDK does all this without divulging proprietary information, such as how the protocols are designed, or by sharing PCAPs that may contain sensitive information. The complete list of supported protocols is listed in the table below.
-| Supported Protocol | | | |
+| Supported Protocols | | | |
|--|--|--|--| | AMS (Beckhoff) | GOOSE (IoT/OT) | PCCC (Rockwell) | VLAN (Generic) | | ARP (Generic) | Honeywell Experion (Honeywell) | PCS7 (Siemens) | Wonderware Suitelink (Schneider Electric/Wonderware) |
-| Asterix (IoT/OT) | HL7 (Generic) | Profinet DCP (Siemens/Generic) | Yokogawa HIS Equalize (Yokogawa) |
+| HL7 (Generic) | Profinet DCP (Siemens/Generic) | Yokogawa HIS Equalize (Yokogawa) | |
| ASTM (Generic) | ICMP (Generic) | Profinet Realtime (Siemens/Generic) | | | BACnet (IoT/OT) | IEC 60870 (IEC104/101) (IoT/OT) | RPC (Generic) | | | BeckhoffTwincat (Beckhoff) | IPv4 (Generic) | Yokogawa VNet/IP (Yokogawa) | |
Azure Defender for IoT supports a broad range of protocols across a diverse ente
| CDP (Cisco) | LLDP (Generic) | Siemens S7 (Siemens) | | | CITECTSCADA ODBC SERVICE (Citect) | Lontalk (IoT/OT) | Siemens S7-Plus (Siemens) | | | Codesys V3 (Generic) | Mitsubishi Melsec/Melsoft (Mitsubishi) | Siemens SICAM (Siemens) |
-| DICOM (Generic) | MMS (including ABB extension) (ABB / Generic) | Siemens WinCC (Siemens) |
+| MMS (including ABB extension) (ABB / Generic) | Siemens WinCC (Siemens) | | |
| DNP3 (IoT/OT) | Modbus over CIP (Rockwell) | SMB / Browse / NBDGM (Generic) | | | DNS (Generic) | Modbus RTU (IoT/OT) | SMV (SAMPLED-VALUES) (IoT/OT) | | Emerson DeltaV (Emerson) | Modbus Schneider Electric extensions / Unity (Schneider Electric) | SSH (Generic) | |
Azure Defender for IoT supports a broad range of protocols across a diverse ente
| EtherNet/IP CIP (including Rockwell extension) (Rockwell) | netbios (Generic) | TDS (Oracle) | | | Euromap 63 (IoT/OT) | NTLM (Generic) | TNS (Oracle) | | | GE EGD (GE) | Omron FINS (Omron) | Toshiba Computer Link (Toshiba) | |
-| GE-SRTP (GE) | OSISoft (OSI Soft) | UDP (Generic) | |
+| GE-SRTP (GE) | UDP (Generic) | | |
-## Add support for restricted protocols
+Horizon community protocol dissectors and proprietary protocol dissectors developed by customers are also supported. See [Horizon proprietary protocol dissector](references-horizon-sdk.md) for details.
+
+## Quickly add support for proprietary restricted protocols
The Industrial Internet of Things (IIoT) unlocks new levels of productivity, which in turn help organizations improve security, increase output, and maximize revenue. Digitalization is driving the deployment of billions of IoT devices and increasing the connectivity between IT and OT networks. This increased connectivity expands the attack surface and raises the risk of dangerous cyberattacks on industrial control systems. The Horizon Protocol SDK allows quick support for 100% of the protocols used in IoT and ICS environments. Custom or proprietary protocols can be restricted so that they can't be shared outside your organization, whether because of regulations or corporate policies. The Horizon SDK allows you to write plug-ins that enable Deep Packet Inspection (DPI) on the traffic and detect threats in real time. The Horizon SDK also makes extra customizations possible. For example, it enables asset vendors, partners, or platform owners to localize and customize the text for alerts, events, and protocol parameters.
-[![The Horizon SDK allows quick support for 100% of the protocols used in IOT, and ICS environments.](media/concept-supported-protocols/sdk-horizon.png)](media/concept-supported-protocols/sdk-horizon-expanded.png#lightbox)
+[![The Horizon SDK allows quick support for 100% of the protocols used in IoT, and ICS environments.](media/concept-supported-protocols/sdk-horizon.png)](media/concept-supported-protocols/sdk-horizon-expanded.png#lightbox)
+
+## Collaborate with the Horizon community
+
+Be part of a community that is leading the way toward digital transformation and industry-wide collaboration for protocol support. The Horizon ICS community allows knowledge sharing for domain experts in critical infrastructures, building management, production lines, transportation systems, and other industrial leaders.
+
+The community provides tutorials, discussion forums, instructor-led training, educational white papers, webinars, and more.
+
+We invite you to join our community here: <horizon-community@microsoft.com>
## Next steps
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Defender for IoT installation description: Learn how to install a sensor and the on-premises management console for Azure Defender for IoT. Previously updated : 06/21/2021 Last updated : 09/12/2021
This article describes how to install the following elements of Azure Defender f
This article covers the following installation information:
- - **Hardware:** Dell and HPE physical appliance details.
+- **Hardware:** Dell and HPE physical appliance details.
- - **Software:** Sensor and on-premises management console software installation.
+- **Software:** Sensor and on-premises management console software installation.
- - **Virtual Appliances:** Virtual machine details and software installation.
+- **Virtual Appliances:** Virtual machine details and software installation.
After installation, connect your sensor to your network.
-## About Defender for IoT appliances
+## About Defender for IoT appliances
The following sections provide information about Defender for IoT sensor appliances and the appliance for the Defender for IoT on-premises management console.
To install:
Before the installation, ensure you have:
- - Rufus installed.
+- Rufus installed.
- - A disk on key with USB version 3.0 and later. The minimum size is 4 GB.
+- A disk on key with USB version 3.0 and later. The minimum size is 4 GB.
- - An ISO installer image file.
+- An ISO installer image file.
The disk on a key will be erased in this process.
To prepare a disk on a key:
Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration:
- - [Dell PowerEdge R340 Front Panel](#dell-poweredge-r340-front-panel) and [Dell PowerEdge R340 Back Panel](#dell-poweredge-r340-back-panel) contains the description of front and back panels, along with information required for installation, such as drivers and ports.
+- [Dell PowerEdge R340 Front Panel](#dell-poweredge-r340-front-panel) and [Dell PowerEdge R340 Back Panel](#dell-poweredge-r340-back-panel) contains the description of front and back panels, along with information required for installation, such as drivers and ports.
- - [Dell BIOS Configuration](#dell-bios-configuration) provides information about how to connect to the Dell appliance management interface and configure the BIOS.
+- [Dell BIOS Configuration](#dell-bios-configuration) provides information about how to connect to the Dell appliance management interface and configure the BIOS.
- - [Software Installation (Dell R340)](#software-installation-dell-r340) describes the procedure required to install the Defender for IoT sensor software.
+- [Software Installation (Dell R340)](#software-installation-dell-r340) describes the procedure required to install the Defender for IoT sensor software.
-### Dell PowerEdge R340XL requirements
+### Dell PowerEdge R340XL requirements
To install the Dell PowerEdge R340XL appliance, you need:
To install the Dell PowerEdge R340XL appliance, you need:
:::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-front-panel.jpg" alt-text="Dell PowerEdge R340 front panel.":::
- 1. Left control panel
- 1. Optical drive (optional)
- 1. Right control panel
- 1. Information tag
+ 1. Left control panel
+ 1. Optical drive (optional)
+ 1. Right control panel
+ 1. Information tag
1. Drives ### Dell PowerEdge R340 back panel :::image type="content" source="media/tutorial-install-components/view-of-dell-poweredge-r340-back-panel.jpg" alt-text="Dell PowerEdge R340 back panel.":::
-1. Serial port
-1. NIC port (Gb 1)
-1. NIC port (Gb 1)
-1. Half-height PCIe
-1. Full-height PCIe expansion card slot
-1. Power supply unit 1
-1. Power supply unit 2
-1. System identification
-1. System status indicator cable port (CMA) button
-1. USB 3.0 port (2)
-1. iDRAC9 dedicated network port
-1. VGA port
+1. Serial port
+1. NIC port (Gb 1)
+1. NIC port (Gb 1)
+1. Half-height PCIe
+1. Full-height PCIe expansion card slot
+1. Power supply unit 1
+1. Power supply unit 2
+1. System identification
+1. System status indicator cable port (CMA) button
+1. USB 3.0 port (2)
+1. iDRAC9 dedicated network port
+1. VGA port
### Dell BIOS configuration Dell BIOS configuration is required to adjust the Dell appliance to work with the software.
-The BIOS configuration is performed through a predefined configuration. The file is accessible from the [Help Center](https://cyberx-labs.zendesk.com/hc/).
-
-Import the configuration file to the Dell appliance. Before using the configuration file, you need to establish the communication between the Dell appliance and the management computer.
- The Dell appliance is managed by an integrated iDRAC with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances. To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet. When the connection is established, the BIOS is configurable.
-To configure Dell BIOS:
+**To configure Dell BIOS**:
1. [Configure the iDRAC IP address](#configure-idrac-ip-address)
-1. [Import the BIOS configuration file](#import-the-bios-configuration-file)
+1. [Configuring the BIOS](#configuring-the-bios)
#### Configure iDRAC IP address
To configure Dell BIOS:
1. Select **Back** > **Finish**.
-#### Import the BIOS configuration file
-
-This section describes how to configure the BIOS by using the configuration file.
-
-1. Plug in a PC with a static preconfigured IP address **10.100.100.200** to the **iDRAC** port.
-
- :::image type="content" source="media/tutorial-install-components/idrac-port.png" alt-text="Screenshot of the preconfigured IP address port.":::
-
-1. Open a browser and enter **10.100.100.250** to connect to iDRAC web interface.
-
-1. Sign in with Dell default administrator privileges:
-
- - Username: **root**
-
- - Password: **calvin**
+#### Configuring the BIOS
-1. The appliance's credentials are:
-
- - Username: **XXX**
-
- - Password: **XXX**
-
- The import server profile operation is initiated.
-
- > [!NOTE]
- > Before you import the file, make sure:
- > - You're the only user who is currently connected to iDRAC.
- > - The system is not in the BIOS menu.
-
-1. Go to **Configuration** > **Server Configuration Profile**. Set the following parameters:
-
- :::image type="content" source="media/tutorial-install-components/configuration-screen.png" alt-text="Screenshot that shows the configuration of your server profile.":::
-
- | Parameter | Configuration |
- |--|--|
- | Location Type | Select **Local**. |
- | File Path | Select **Choose File** and add the configuration XML file. |
- | Import Components | Select **BIOS, NIC, RAID**. |
- | Maximum wait time | Select **20 minutes**. |
-
-1. Select **Import**.
-
-1. To monitor the process, go to **Maintenance** > **Job Queue**.
-
- :::image type="content" source="media/tutorial-install-components/view-the-job-queue.png" alt-text="Screenshot that shows Job Queue.":::
-
-#### Manually configuring BIOS
-
-You need to manually configure the appliance BIOS if:
+You need to configure the appliance BIOS if:
- You did not purchase your appliance from Arrow.
You need to manually configure the appliance BIOS if:
After you access the BIOS, go to **Device Settings**.
-To manually configure:
+**To configure the BIOS**:
-1. Access the appliance BIOS directly by using a keyboard and screen, or use iDRAC.
+1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
- If the appliance is not a Defender for IoT appliance, open a browser and go to the IP address that was configured before. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
To install:
This section describes the HPE ProLiant DL20 installation process, which includes the following steps:
- - Enable remote access and update the default administrator password.
- - Configure BIOS and RAID settings.
- - Install the software.
+- Enable remote access and update the default administrator password.
+- Configure BIOS and RAID settings.
+- Install the software.
### About the installation
- - Enterprise and SMB appliances can be installed. The installation process is identical for both appliance types, except for the array configuration.
- - A default administrative user is provided. We recommend that you change the password during the network configuration process.
- - During the network configuration process, you'll configure the iLO port on network port 1.
- - The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+- Enterprise and SMB appliances can be installed. The installation process is identical for both appliance types, except for the array configuration.
+- A default administrative user is provided. We recommend that you change the password during the network configuration process.
+- During the network configuration process, you'll configure the iLO port on network port 1.
+- The installation process takes about 20 minutes. After the installation, the system is restarted several times.
### HPE ProLiant DL20 front panel
To enable and update the password:
:::image type="content" source="media/tutorial-install-components/system-configuration-window-v2.png" alt-text="Screenshot that shows the System Configuration window.":::
- 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
-
- 1. Disable DHCP.
-
- 1. Enter the IP address, subnet mask, and gateway IP address.
+ 1. Select **Shared Network Port-LOM** from the **Network Interface Adapter** field.
+
+ 1. Disable DHCP.
+
+ 1. Enter the IP address, subnet mask, and gateway IP address.
1. Select **F10: Save**. 1. Select **Esc** to get back to the **iLO 5 Configuration Utility**, and then select **User Management**.
-1. Select **Edit/Remove User**. The administrator is the only default user defined.
+1. Select **Edit/Remove User**. The administrator is the only default user defined.
1. Change the default password and select **F10: Save**.
To enable and update the password:
The following procedure describes how to configure the HPE BIOS for the enterprise and SMB appliances.
-To configure the HPE BIOS:
+**To configure the HPE BIOS**:
1. Select **System Utilities** > **System Configuration** > **BIOS/Platform Configuration (RBSU)**.
To configure the HPE BIOS:
:::image type="content" source="media/tutorial-install-components/boot-override-window-one-v2.png" alt-text="Screenshot that shows the first Boot Override window."::: :::image type="content" source="media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window.":::+ ### Software installation (HPE ProLiant DL20 appliance) The installation process takes about 20 minutes. After the installation, the system is restarted several times.
To install the software:
:::image type="content" source="media/tutorial-install-components/select-english-screen.png" alt-text="Selection of English in the CLI window.":::
-1. Select **SENSOR-RELEASE-<version> Enterprise**.
+1. Select **SENSOR-RELEASE-\<version> Enterprise**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot of the screen for selecting a version.":::
To install the software:
## HPE ProLiant DL360 installation
- - A default administrative user is provided. We recommend that you change the password during the network configuration.
+- A default administrative user is provided. We recommend that you change the password during the network configuration.
- - During the network configuration, you'll configure the iLO port.
+- During the network configuration, you'll configure the iLO port.
- - The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+- The installation process takes about 20 minutes. After the installation, the system is restarted several times.
### HPE ProLiant DL360 front panel
To install the software:
Refer to the preceding sections for HPE ProLiant DL20 installation:
- - "Enable remote access and update the password"
+- "Enable remote access and update the password"
- - "Configure the HPE BIOS"
+- "Configure the HPE BIOS"
The enterprise configuration is identical.
To install:
1. Select **English**.
-1. Select **SENSOR-RELEASE-<version> Enterprise**.
+1. Select **SENSOR-RELEASE-\<version> Enterprise**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Screenshot that shows selecting the version.":::
To install:
The following procedure describes how to configure the BIOS for HP EL300 appliance.
-To configure the BIOS:
+**To configure the BIOS**:
1. Turn on the appliance and push **F9** to enter the BIOS.
You can deploy the virtual machine for the Defender for IoT sensor in the follow
The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, make sure you have the following items:
- - VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
+- VMware (ESXi 5.5 or later) or Hyper-V hypervisor (Windows 10 Pro or Enterprise) installed and operational
- - Available hardware resources for the virtual machine
+- Available hardware resources for the virtual machine
- - ISO installation file for the Azure Defender for IoT sensor
+- ISO installation file for the Azure Defender for IoT sensor
Make sure the hypervisor is running.
To create a virtual machine:
1. Enter the name and location for the VHD.
-1. Enter the required size (according to the architecture).
+1. Enter the required size (according to the architecture).
1. Review the summary and select **Finish**.
To create a virtual machine:
1. Start the virtual machine.
-2. On the **Actions** menu, select **Connect** to continue the software installation.
+1. On the **Actions** menu, select **Connect** to continue the software installation.
### Software installation (ESXi and Hyper-V)
Before installing the software on the appliance, you need to adjust the applianc
### BIOS configuration
-To configure the BIOS for your appliance:
+**To configure the BIOS for your appliance**:
1. [Enable remote access and update the password](#enable-remote-access-and-update-the-password).
To configure the BIOS for your appliance:
### Software installation
-The installation process takes about 20 minutes. After the installation, the system is restarted several times.
+The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-During the installation process, you will can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
+During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic) at a later time.
To install the software: 1. Select your preferred language for the installation process.
- :::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Select your preferred language for the installation process.":::
+ :::image type="content" source="media/tutorial-install-components/on-prem-language-select.png" alt-text="Select your preferred language for the installation process.":::
1. Select **MANAGEMENT-RELEASE-\<version\>\<deployment type\>**.
- :::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Select your version.":::
+ :::image type="content" source="media/tutorial-install-components/on-prem-install-screen.png" alt-text="Select your version.":::
1. In the Installation Wizard, define the network properties:
- :::image type="content" source="media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
+ :::image type="content" source="media/tutorial-install-components/on-prem-first-steps-install.png" alt-text="Screenshot that shows the appliance profile.":::
| Parameter | Configuration | |--|--|
To install the software:
| **configure subnet mask:** | **IP address provided by the customer** | | **configure DNS:** | **IP address provided by the customer** | | **configure default gateway IP address:** | **IP address provided by the customer** |
-
+ 1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile, and network properties: :::image type="content" source="media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions.":::
To install the software:
| **configure an IP address for the sensor monitoring interface:** | **IP address provided by the customer** | | **configure a subnet mask for the sensor monitoring interface:** | **IP address provided by the customer** |
-1. Accept the settlings and continue by typing `Y`.
+1. Accept the settings and continue by typing `Y`.
1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
You can enhance security to your on-premises management console by adding a seco
:::image type="content" source="media/tutorial-install-components/secondary-nic.png" alt-text="The overall architecture of the secondary NIC.":::
-Both NICs will support the user interface (UI).
-
-If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
+Both NICs will support the user interface (UI).
+If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
If you have already configured your on-premises management console, and would like to add a secondary NIC to your on-premises management console, use the following steps:
If you are having trouble locating the physical port on your device, you can use
sudo ethtool -p <port value> <time-in-seconds> ```
-This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes allowing you to find the port on the back of your appliance.
+This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` makes port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
## Virtual appliance: On-premises management console installation The on-premises management console VM supports the following architectures:
-| Architecture | Specifications | Usage |
+| Architecture | Specifications | Usage |
|--|--|--| | Enterprise <br/>(Default and most common) | CPU: 8 <br/>Memory: 32G RAM<br/> HDD: 1.8 TB | Large production environments | | Small | CPU: 4 <br/> Memory: 8G RAM<br/> HDD: 500 GB | Large production environments |
-| Office | CPU: 4 <br/>Memory: 8G RAM <br/> HDD: 100 GB | Small test environments |
-
+| Office | CPU: 4 <br/>Memory: 8G RAM <br/> HDD: 100 GB | Small test environments |
+ ### Prerequisites The on-premises management console supports both VMware and Hyper-V deployment options. Before you begin the installation, verify the following:
The on-premises management console supports both VMware and Hyper-V deployment o
- The hardware resources are available for the virtual machine. - You have the ISO installation file for the on-premises management console.
-
+ - The hypervisor is running. ### Create the virtual machine (ESXi)
This section describes installation procedures for *legacy* appliances only. See
### Nuvo 5006LP installation
-This section provides the Nuvo 5006LP installation procedure. Before installing the software on the Nuvo 5006LP appliance, you need to adjust the appliance BIOS configuration.
+This section provides the Nuvo 5006LP installation procedure. Before installing the software on the Nuvo 5006LP appliance, you need to adjust the appliance BIOS configuration.
#### Nuvo 5006LP front panel
To configure the BIOS:
1. Navigate to **Boot** and ensure that **PXE Boot to LAN** is set to **Disabled**.
-1. Press **F10** to save, and then select **Exit**.
+1. Press **F10** to save, and then select **Exit**.
#### Software installation (Nuvo 5006LP)
The installation process takes approximately 20 minutes. After installation, the
1. Select **English**.
-1. Select **XSENSE-RELEASE-<version> Office...**.
+1. Select **XSENSE-RELEASE-\<version> Office...**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
The installation process takes approximately 20 minutes. After installation, the
| -| - | | **Hardware profile** | Select **office**. | | **Management interface** | **eth0** |
- | **Management network IP address** | **IP address provided by the customer** |
- | **Management subnet mask** | **IP address provided by the customer** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
| **DNS** | **IP address provided by the customer** |
- | **Default gateway IP address** | **0.0.0.0** |
+ | **Default gateway IP address** | **0.0.0.0** |
| **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. | | **Bridge interface** | - |
After approximately 10 minutes, sign-in credentials are automatically generated.
This section provides the Fitlet2 installation procedure. Before installing the software on the Fitlet appliance, you need to adjust the appliance's BIOS configuration.
-#### Fitlet2 front panel
+#### Fitlet2 front panel
:::image type="content" source="media/tutorial-install-components/fitlet-front-panel.png" alt-text="A view of the front panel of the Fitlet 2.":::
This section provides the Fitlet2 installation procedure. Before installing the
1. Navigate to **Boot** > **Boot mode** select, and select **Legacy**. 1. Select **Boot Option #1 ΓÇô [USB CD/DVD]**.
-
+ 1. Select **Save & Exit**. #### Software installation (Fitlet2)
The installation process takes approximately 20 minutes. After installation, the
1. Select **English**.
-1. Select **XSENSE-RELEASE-<version> Office...**.
+1. Select **XSENSE-RELEASE-\<version> Office...**.
:::image type="content" source="media/tutorial-install-components/sensor-version-select-screen-v2.png" alt-text="Select the version of the sensor to install.":::
The installation process takes approximately 20 minutes. After installation, the
| -| - | | **Hardware profile** | Select **office**. | | **Management interface** | **em1** |
- | **Management network IP address** | **IP address provided by the customer** |
- | **Management subnet mask** | **IP address provided by the customer** |
+ | **Management network IP address** | **IP address provided by the customer** |
+ | **Management subnet mask** | **IP address provided by the customer** |
| **DNS** | **IP address provided by the customer** |
- | **Default gateway IP address** | **0.0.0.0** |
+ | **Default gateway IP address** | **0.0.0.0** |
| **Input interface** | The list of input interfaces is generated for you by the system. <br />To mirror the input interfaces, copy all the items presented in the list with a comma separator. | | **Bridge interface** | - |
Perform the validation by using the GUI or the CLI. The validation is available
Post-installation validation must include the following tests:
- - **Sanity test**: Verify that the system is running.
+- **Sanity test**: Verify that the system is running.
- - **Version**: Verify that the version is correct.
+- **Version**: Verify that the version is correct.
- - **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
+- **ifconfig**: Verify that all the input interfaces configured during the installation process are running.
### Check system health by using the GUI
Post-installation validation must include the following tests:
- **Core Log**: Provides the last 500 rows of the core log, enabling you to view the recent log rows without exporting the entire system log. -- **Task Manager**: Translates the tasks that appear in the table of processes to the following layers:
+- **Task Manager**: Translates the tasks that appear in the table of processes to the following layers:
- - Persistent layer (Redis)
+ - Persistent layer (Redis)
    - Cache layer (SQL) - **Network Statistics**: Displays your network statistics.
Post-installation validation must include the following tests:
- **TOP**: Shows the table of processes. It's a Linux command that provides a dynamic real-time view of the running system. - **Backup Memory Check**: Provides the status of the backup memory, checking the following:
- - The location of the backup folder
+ - The location of the backup folder
- The size of the backup folder - The limitations of the backup folder - When the last backup happened
defender-for-iot Integration Qradar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/integration-qradar.md
- Title: QRadar integration
-description: Configure the Defender for IoT solution integration with QRadar.
Previously updated : 1/4/2021---
-# About the QRadar integration
-
-Defender for IoT delivers the only ICS and IoT cybersecurity platform built by blue-team experts with a track record defending critical national infrastructure, and the only platform with patented ICS-aware threat analytics and machine learning.
-
-Defender for IoT has integrated its continuous ICS threat monitoring platform with IBM QRadar.
-
-Some of the benefits of the integration include:
--- The ability to forward Azure Defender for IoT alerts to IBM QRadar for unified IT, and OT security monitoring, and governance.--- The ability to gain a bird's eye view across both IT, and OT environments. This allows you to detect, and respond to multi-stage attacks that often cross IT, and OT boundaries.--- Integrate with existing SOC workflows.-
-## Configuring Syslog listener for QRadar
-
-To configure the Syslog listener to work with QRadar:
-
-1. Sign in to QRadar.
-
-1. From the left pane, select **Admin** > **Data Sources**.
-
- [:::image type="content" source="media/integration-qradar/log.png" alt-text="Select log sources from the available options.":::](media/integration-qradar/log.png#lightbox)
-
-1. In the Data Sources window, select **Log Sources**.
-
- [:::image type="content" source="media/integration-qradar/modal.png" alt-text="After selecting Syslog the modal window opens.":::](media/integration-qradar/modal.png#lightbox)
-
-1. In the **Modal** window, select **Add**.
-
- :::image type="content" source="media/integration-qradar/source.png" alt-text="Add a log source by filling in the appropriate fields.":::
-
-1. In the **Add a log source** dialog box, set the following parameters:
-
- - **Log Source Name**: `<XSense Name>`
-
- - **Log Source Description**: `<XSense Name>`
-
- - **Log Source Type**: `Universal LEEF`
-
- - **Protocol Configuration**: `Syslog`
-
- - **Log Source Identifier**: `<XSenseName>`
-
- > [!NOTE]
- > The Log Source Identifier name must not include a white space. It is recommended to replace each white space character with an underscore.
-
-1. Select **Save**.
-
-1. Select **Deploy Changes**.
--
-## Deploying Defender for IoT platform QID
-
-QID is an event identifier in QRadar. All of Defenders for IoT platform reports are tagged under the same event (XSense Alert).
-
-**To deploy Xsense QID**:
-
-1. Sign in to the QRadar console.
-
-1. Create a file named `xsense_qids`.
-
-1. In the file, using the following command: `,XSense Alert,XSense Alert Report From <XSense Name>,5,7001`.
-
-1. Execute: `sudo /opt/qradar/bin/qidmap_cli.sh -i -f <path>/xsense_qids`. The message that the QID was deployed successfully appears.
-
-## Setting Up QRadar forwarding rules
-
-In the Defender for IoT appliance, configure a Qradar forwarding rule. Map the rule on the on-premises management console.
-
-**To define QRadar notifications in the Defender for IoT appliance**:
-
-1. In the side menu, select **Forwarding**.
-
- :::image type="content" source="media/integration-qradar/create.png" alt-text="Create a Forwarding Rule":::
-
-1. Set the Action to **QRadar**.
-
-1. Configure the QRadar IP address, and the timezone.
-
-1. Select **Submit**.
-
-**To map notifications to QRadar in the Central Manager**:
-
-1. From the side menu, select **Forwarding**.
-
-1. In the Qradar GUI, under QRadar, select **Log Activity** .
-
-1. Select **Add Filter** and set the following parameters:
- - Parameter: `Log Sources [Indexed]`
- - Operator: `Equals`
- - Log Source Group: `Other`
- - Log Source: `<Xsense Name>`
-
-1. Double-click an unknown report from XSense.
-
-1. Select **Map Event**.
-
-1. In the Modal Log Source Event page, select as follows:
- - High-Level Category - Suspicious Activity + Low-Level Category - Unknown Suspicious Event + Log
- - Source Type - any
-
-1. Select **Search**.
-
-1. From the results, choose the line in which the name XSense appears, and select **OK**.
-
-All the XSense reports from now on are tagged as XSense Alerts.
-
-## Adding custom fields to alerts
-
-**To add custom fields to alerts**:
-
-1. Select **Extract Property**.
-
-1. Select **Regex Based**.
-
-1. Configure the following fields:
- - New Property: _choose from the list below_
- - Xsense Alert Description
- - Xsense Alert ID
- - Xsense Alert Score
- - Xsense Alert Title
- - Xsense Destination Name
- - Xsense Direct Redirect
- - Xsense Sender IP
- - Xsense Sender Name
- - Xsense Alert Engine
- - Xsense Source Device Name
- - Check **Optimize Parsing**
- - Field Type: `AlphaNumeric`
- - Check **Enabled**
- - Log Source Type: `Universal LEAF`
- - Log Source: `<Xsense Name>`
- - Event Name (should be already set as XSense Alert)
- - Capture Group: 1
- - Regex:
- - Xsense Alert Description RegEx: `msg=(.*)(?=\t)`
- - Xsense Alert ID RegEx: `alertId=(.*)(?=\t)`
- - Xsense Alert Score RegEx: `Detected score=(.*)(?=\t)`
- - Xsense Alert Title RegEx: `title=(.*)(?=\t)`
- - Xsense Destination Name RegEx: `dstName=(.*)(?=\t)`
- - Xsense Direct Redirect RegEx: `rta=(.*)(?=\t)`
- - Xsense Sender IP: RegEx: `reporter=(.*)(?=\t)`
- - Xsense Sender Name RegEx: `senderName=(.*)(?=\t)`
- - Xsense Alert Engine RegEx: `engine =(.*)(?=\t)`
- - Xsense Source Device Name RegEx: `src`
-
-## Defining Defender for IoT appliance name
-
-You can change the name of the platform at any time.
-
-When building sites, and assigning appliances to zones in the on-premises management console, you should assign each appliance a significant name. For example, ΓÇ£Motorcycles PL Unit 2ΓÇ¥ means that this appliance is protecting unit #2 in the Motorcycles production line.
-
-It is important to pick a meaningful name for your appliance, because the appliance's name is passed on to the logs. When reviewing logs, each alert has a sensor attached to it. You will be able to identify which sensor is related to each alert based on its name.
-
-**To change the appliance name**:
-
-1. On the side menu, select the current appliance name. The **Edit management console configuration** dialog box appears.
-
- :::image type="content" source="media/integration-qradar/edit-management-console.png" alt-text="Change the name of your console.":::
-
-1. Enter a name in the Name field and select **Save**.
-
-## Next steps
-
-Learn how to [Forward alert information](how-to-forward-alert-information-to-partners.md).
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/tutorial-qradar.md
+
+ Title: Integrate Qradar with Azure Defender for IoT
+description: In this tutorial, learn how to integrate Qradar with Azure Defender for IoT.
+++ Last updated : 09/12/2021+++
+# Tutorial: Integrate Qradar with Azure Defender for IoT
+
+This tutorial will help you learn how to integrate and use QRadar with Azure Defender for IoT.
+
+Defender for IoT delivers the only ICS and IoT cybersecurity platform with patented ICS-aware threat analytics and machine learning.
+
+Defender for IoT has integrated its continuous ICS threat monitoring platform with IBM QRadar.
+
+Some of the benefits of the integration include:
+
+- The ability to forward Azure Defender for IoT alerts to IBM QRadar for unified IT and OT security monitoring and governance.
+
+- The ability to gain an overview of both IT and OT environments, allowing you to detect and respond to multi-stage attacks that often cross IT and OT boundaries.
+
+- Integrate with existing SOC workflows.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure Syslog listener for QRadar
+> * Deploy Defender for IoT platform QID
+> * Set up QRadar forwarding rules
+> * Map notifications to QRadar in the Management Console
+> * Add custom fields to alerts
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+There are no prerequisites for this tutorial.
+
+## Configure Syslog listener for QRadar
+
+**To configure the Syslog listener to work with QRadar**:
+
+1. Sign in to QRadar.
+
+1. From the left pane, select **Admin** > **Data Sources**.
+
+1. In the Data Sources window, select **Log Sources**.
+
+    [:::image type="content" source="media/tutorial-qradar/log.png" alt-text="Screenshot of selecting a log source from the available options.":::](media/tutorial-qradar/log.png#lightbox)
+
+1. In the **Modal** window, select **Add**.
+
+    [:::image type="content" source="media/tutorial-qradar/modal.png" alt-text="Screenshot of the modal window that opens after selecting Syslog.":::](media/tutorial-qradar/modal.png#lightbox)
+
+1. In the **Add a log source** dialog box, set the following parameters:
+
+    :::image type="content" source="media/tutorial-qradar/source.png" alt-text="Screenshot of adding a log source by filling in the appropriate fields.":::
+
+ - **Log Source Name**: `<Sensor name>`
+
+ - **Log Source Description**: `<Sensor name>`
+
+ - **Log Source Type**: `Universal LEEF`
+
+ - **Protocol Configuration**: `Syslog`
+
+ - **Log Source Identifier**: `<Sensor name>`
+
+ > [!NOTE]
+ > The Log Source Identifier name must not include a white space. It is recommended to replace each white space character with an underscore.
+
+1. Select **Save**.
+
+1. Select **Deploy Changes**.
+
+ :::image type="content" source="media/tutorial-qradar/deploy.png" alt-text="Screenshot of Deploy Changes view":::
+
+## Deploy Defender for IoT platform QID
+
+QID is an event identifier in QRadar. All of the Defender for IoT platform reports are tagged under the same event (Sensor Alert).
+
+**To deploy Defender for IoT platform QID**:
+
+1. Sign in to the QRadar console.
+
+1. Create a file named `xsense_qids`.
+
+1. In the file, add the following line: `,XSense Alert,XSense Alert Report From <XSense Name>,5,7001`.
+
+1. Execute: `sudo /opt/qradar/bin/qidmap_cli.sh -i -f <path>/xsense_qids`. The message that the QID was deployed successfully appears.
+
+## Set up QRadar forwarding rules
+
+For the integration to work, you need to set up a QRadar forwarding rule in the Defender for IoT appliance.
+
+**To define QRadar notifications in the Defender for IoT appliance**:
+
+1. In the side menu, select **Forwarding**.
+
+1. Select **Create Forwarding Rule**.
+
+1. Set the Action to **QRadar**.
+
+    :::image type="content" source="media/tutorial-qradar/create.png" alt-text="Screenshot of the Create Forwarding Rule window.":::
+
+1. Configure the QRadar IP address, and the timezone.
+
+1. Select **Submit**.
+
+## Map notifications to QRadar
+
+The rule must then be mapped on the on-premises management console.
+
+**To map the notifications to QRadar**:
+
+1. Sign in to the management console.
+
+1. From the left side pane, select **Forwarding**.
+
+1. In the QRadar GUI, under QRadar, select **Log Activity**.
+
+1. Select **Add Filter** and set the following parameters:
+ - Parameter: `Log Sources [Indexed]`
+ - Operator: `Equals`
+ - Log Source Group: `Other`
+ - Log Source: `<Xsense Name>`
+
+1. Double-click an unknown report from the sensor.
+
+1. Select **Map Event**.
+
+1. In the Modal Log Source Event page, select as follows:
+    - High-Level Category - Suspicious Activity
+    - Low-Level Category - Unknown Suspicious Event
+    - Log Source Type - any
+
+1. Select **Search**.
+
+1. From the results, choose the line in which the name XSense appears, and select **OK**.
+
+All of the sensor reports from now on are tagged as Sensor Alerts.
+
+## Add custom fields to the alerts
+
+**To add custom fields to alerts**:
+
+1. Select **Extract Property**.
+
+1. Select **Regex Based**.
+
+1. Configure the following fields:
+ - New Property: _choose from the list below_
+ - Sensor Alert Description
+ - Sensor Alert ID
+ - Sensor Alert Score
+ - Sensor Alert Title
+ - Sensor Destination Name
+ - Sensor Direct Redirect
+ - Sensor Sender IP
+ - Sensor Sender Name
+ - Sensor Alert Engine
+ - Sensor Source Device Name
+ - Check **Optimize Parsing**
+ - Field Type: `AlphaNumeric`
+ - Check **Enabled**
+    - Log Source Type: `Universal LEEF`
+ - Log Source: `<Sensor Name>`
+ - Event Name (should be already set as Sensor Alert)
+ - Capture Group: 1
+ - Regex:
+ - Sensor Alert Description RegEx: `msg=(.*)(?=\t)`
+ - Sensor Alert ID RegEx: `alertId=(.*)(?=\t)`
+ - Sensor Alert Score RegEx: `Detected score=(.*)(?=\t)`
+ - Sensor Alert Title RegEx: `title=(.*)(?=\t)`
+ - Sensor Destination Name RegEx: `dstName=(.*)(?=\t)`
+ - Sensor Direct Redirect RegEx: `rta=(.*)(?=\t)`
+ - Sensor Sender IP: RegEx: `reporter=(.*)(?=\t)`
+ - Sensor Sender Name RegEx: `senderName=(.*)(?=\t)`
+ - Sensor Alert Engine RegEx: `engine =(.*)(?=\t)`
+ - Sensor Source Device Name RegEx: `src`
+
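The regular expressions in the list above all follow the same shape: capture everything after a `key=` prefix, up to (but not including) a tab in the LEEF payload. The short Python sketch below shows how the description pattern behaves against a hypothetical, abbreviated payload fragment; the real alert payload contains more tab-separated fields.

```python
import re

# Same pattern as the "Sensor Alert Description" property above.
description_pattern = r"msg=(.*)(?=\t)"

# Hypothetical single-field fragment of a LEEF payload (fields are tab-terminated).
fragment = "msg=Firmware update detected\t"

match = re.search(description_pattern, fragment)
print(match.group(1))  # -> Firmware update detected

# (?=\t) is a lookahead: it requires a tab after the captured text without
# including the tab in the capture. Because (.*) is greedy, in a full payload
# with several tab-separated fields the capture runs to the last tab it can
# reach, so results depend on where the field sits in the line.
```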
+## Clean up resources
+
+There are no resources to clean up.
+
+## Next steps
+
+In this tutorial, you learned how to get started with the QRadar integration. Continue on to learn how to [Integrate ServiceNow with Azure Defender for IoT](tutorial-servicenow.md).
+
+> [!div class="nextstepaction"]
+> [Integrate ServiceNow with Azure Defender for IoT](tutorial-servicenow.md)
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/tutorial-splunk.md
description: In this tutorial, learn how to integrate Splunk with Azure Defender
Previously updated : 08/03/2021 Last updated : 09/12/2021
In this tutorial, you learn how to:
> * Download the Defender for IoT application in Splunk > * Send Defender for IoT alerts to Splunk
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ ## Prerequisites ### Version requirements
To send alert information to the Splunk servers from Defender for IoT, you will
1. Select **Submit**.
+## Clean up resources
+
+There are no resources to clean up.
+ ## Next steps In this tutorial, you learned how to get started with the Splunk integration. Continue on to learn how to [Integrate ServiceNow with Azure Defender for IoT](tutorial-servicenow.md).
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
param (
[string] $PolicyId,
- #new firewallpolicy name, if not specified will be the previous name with the '_premium' suffix
+    #new firewall policy name; if not specified, the previous name with the '_premium' suffix is used
[Parameter(Mandatory=$false)] [string] $NewPolicyName = ""
function TransformPolicyToPremium {
ResourceGroupName = $Policy.ResourceGroupName Location = $Policy.Location ThreatIntelMode = $Policy.ThreatIntelMode
- BasePolicy = $Policy.BasePolicy
+ BasePolicy = $Policy.BasePolicy.Id
DnsSetting = $Policy.DnsSettings Tag = $Policy.Tag SkuTier = "Premium"
ValidateAzNetworkModuleExists
$policy = Get-AzFirewallPolicy -ResourceId $script:PolicyId ValidatePolicy -Policy $policy TransformPolicyToPremium -Policy $policy+ ``` ## Migrate an existing standard firewall using the Azure portal
frontdoor Front Door Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-health-probes.md
Azure Front Door uses the same three-step process below across all algorithms to
3. For the sets of healthy backends in the backend pool, Front Door additionally measures and maintains the latency (round-trip time) for each backend. > [!NOTE]
-> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all pools will be determined by the responses from the same health probe.
+> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all pools will be determined by the responses from the same health probes.
## Complete health probe failure
frontdoor Concept Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/concept-health-probes.md
Azure Front Door uses the same three-step process below across all algorithms to
1. For the sets of healthy backends in the backend pool, Front Door additionally measures and maintains the latency (round-trip time) for each backend. > [!NOTE]
-> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all pools will be determined by the responses from the same health probe.
+> If a single endpoint is a member of multiple backend pools, Azure Front Door optimizes the number of health probes sent to the backend to reduce the load on the backend. Health probe requests will be sent based on the lowest configured sample interval. The health of the endpoint in all pools will be determined by the responses from the same health probes.
## Complete health probe failure
genomics Quickstart Input Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/genomics/quickstart-input-sas.md
There are two ways to create a SAS token, either using Azure Storage Explorer or
[Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) is a tool to manage resources that you have stored in Azure Storage. You can learn more about how to use Azure Storage Explorer [here](../vs-azure-tools-storage-manage-with-storage-explorer.md).
-The SAS for the input files should be scoped to the specific input file (blob). To create a SAS token, follow [these instructions](../storage/blobs/storage-quickstart-blobs-storage-explorer.md). Once you have created the SAS, the full URL with the query string as well as the query string by itself are provided and can be copied from the screen.
+The SAS for the input files should be scoped to the specific input file (blob). To create a SAS token, follow [these instructions](../storage/blobs/quickstart-storage-explorer.md). Once you've created the SAS, both the full URL with the query string and the query string by itself are displayed so you can copy them from the screen.
![Genomics SAS Storage Explorer](./media/quickstart-input-sas/genomics-sas-storageexplorer.png "Genomics SAS Storage Explorer")
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-upload-data.md
There are also several applications that provide a graphical interface for worki
| Client | Linux | OS X | Windows | | |::|::|::| | [Microsoft Visual Studio Tools for HDInsight](hadoop/apache-hadoop-visual-studio-tools-get-started.md#explore-linked-resources) |✔ |✔ |✔ |
-| [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md) |✔ |✔ |✔ |
+| [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md) |✔ |✔ |✔ |
| [`Cerulea`](https://www.cerebrata.com/products/cerulean/features/azure-storage) | | |✔ | | [CloudXplorer](https://clumsyleaf.com/products/cloudxplorer) | | |✔ | | [CloudBerry Explorer for Microsoft Azure](https://www.cloudberrylab.com/free-microsoft-azure-explorer.aspx) | | |✔ |
hdinsight Apache Spark Intellij Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-intellij-tool-plugin.md
In this article, you learn how to:
## Prerequisites
-* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md). Only HDinsight clusters in public cloud are supported while other secure cloud types (e.g. government clouds) are not.
+* An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md). Only HDInsight clusters in public cloud are supported while other secure cloud types (e.g. government clouds) are not.
* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This article uses Java version 8.0.202.
After creating a Scala application, you can submit it to the cluster.
|Main class name|The default value is the main class from the selected file. You can change the class by selecting the ellipsis (**...**) and choosing another class.| |Job configurations|You can change the default keys and/or values. For more information, see [Apache Livy REST API](https://livy.incubator.apache.org/docs/latest/rest-api.html).| |Command-line arguments|You can enter arguments separated by space for the main class if needed.|
- |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen 2 cluster. For more information: [Apache Spark Configuration](https://spark.apache.org/docs/latest/configuration.html#runtime-environment). See also, [How to upload resources to cluster](../../storage/blobs/storage-quickstart-blobs-storage-explorer.md).|
+ |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen 2 cluster. For more information: [Apache Spark Configuration](https://spark.apache.org/docs/latest/configuration.html#runtime-environment). See also, [How to upload resources to cluster](../../storage/blobs/quickstart-storage-explorer.md).|
|Job Upload Storage|Expand to reveal additional options.| |Storage Type|Select **Use Azure Blob to upload** from the drop-down list.| |Storage Account|Enter your storage account.|
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-maps.md
If you're new to logic apps, review the following documentation:
||-| | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your assembly. Learn [how to create a storage account](../storage/common/storage-account-create.md). | | Blob container | In this container, you can upload your assembly. You also need this container's content URI location when you add the assembly to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
- | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, either [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md). <p>Or, in the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. |
+ | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, either [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). <p>Or, in the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. |
||| * To add larger maps for Consumption logic app resources, you can also use the [Azure Logic Apps REST API - Maps](/rest/api/logic/maps/createorupdate). However, for Standard logic app resources, the Azure Logic Apps REST API is currently unavailable.
logic-apps Logic Apps Enterprise Integration Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-schemas.md
If you're new to logic apps, review the following documentation:
||-| | [Azure storage account](../storage/common/storage-account-overview.md) | In this account, create an Azure blob container for your schema. Learn [how to create a storage account](../storage/common/storage-account-create.md). | | Blob container | In this container, you can upload your schema. You also need this container's content URI later when you add the schema to your integration account. Learn how to [create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md). |
- | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, choose a step: <p>- In the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. <p>- For the desktop version, [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md). |
+ | [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) | This tool helps you more easily manage storage accounts and blob containers. To use Storage Explorer, choose a step: <p>- In the Azure portal, select your storage account. From your storage account menu, select **Storage Explorer**. <p>- For the desktop version, [download and install Azure Storage Explorer](https://www.storageexplorer.com/). Then, connect Storage Explorer to your storage account by following the steps in [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). To learn more, see [Quickstart: Create a blob in object storage with Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md). |
||| To add larger maps for Consumption logic app resources, you can also use the [Azure Logic Apps REST API - Schemas](/rest/api/logic/schemas/create-or-update). However, for Standard logic app resources, the Azure Logic Apps REST API is currently unavailable.
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
The managed SAP connector integrates with SAP systems through your [on-premises
An ISE provides access to resources that are protected by an Azure virtual network and offers other ISE-native connectors that let logic app workflows directly access on-premises resources without using the on-premises data gateway.
-1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/storage-quickstart-blobs-storage-explorer.md).
+1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md).
1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly files:
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Australia Southeast | :heavy_check_mark: | :heavy_check_mark: | :x: | | Brazil South | :heavy_check_mark: | :heavy_check_mark: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Central India | :heavy_check_mark: | :x: | :x: |
+| East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| East Asia (Hong Kong) | :heavy_check_mark: | :x: | :x: |
-| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:|
+| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: | :x: | :x: |
-| Korea South | :heavy_check_mark: | :x: | :x: |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Korea South | :heavy_check_mark: | :heavy_check_mark: | :x: |
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | | Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South Africa North | :heavy_check_mark: | :x: | :x: |
-| Switzerland North | :heavy_check_mark: | :x: | :x: |
+| South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | | West US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West Central US | :heavy_check_mark: | :x: | :x: |
+| West Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
## Contacts
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-maintenance.md
Previously updated : 09/22/2020 Last updated : 09/10/2020 # Scheduled maintenance in Azure Database for PostgreSQL – Flexible server
When specifying preferences for the maintenance schedule, you can pick a day of
> [!IMPORTANT] > Normally there are at least 30 days between successful scheduled maintenance events for a server. >
-> However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
+> However, in the case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days or might be omitted entirely. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
You can update scheduling settings at any time. If there is a maintenance scheduled for your Flexible server and you update scheduling preferences, the current rollout will proceed as scheduled and the scheduling settings change will become effective upon its successful completion for the next scheduled maintenance.
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/how-to-upgrade-using-dump-and-restore.md
You can consider this method if you have few larger tables in your database and
> [!TIP] > The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible server, which is in Preview. The main difference is that the connection string for the flexible server target omits the `@dbName`. For example, if the user name is `pg`, the single server's username in the connection string will be `pg@pg-95`, while with flexible server, you can simply use `pg`.
+## Post upgrade/migrate
+After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues.
+
+```SQL
+postgres=> analyze;
+ANALYZE
+```
+ ## Next steps - After you're satisfied with the target database function, you can drop your old database server.
security-center Security Center Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
No. When you enable [Azure Defender for servers](defender-for-servers-introducti
:::image type="content" source="media/security-center-pricing/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine."::: ### Will I be charged for machines without the Log Analytics agent installed?
-Yes. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent.
+Yes. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent. This applies to Azure virtual machines, Azure virtual machine scale set instances, and Azure Arc-enabled servers.
### If a Log Analytics agent reports to multiple workspaces, will I be charged twice? Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll be charged for every workspace that has a 'Security' or 'AntiMalware' solution installed.
This article explained Security Center's pricing options. For related material,
- [How to optimize your Azure workload costs](https://azure.microsoft.com/blog/how-to-optimize-your-azure-workload-costs/) - [Pricing details in your currency of choice, and according to your region](https://azure.microsoft.com/pricing/details/security-center/)-- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. [Solution targeting](../azure-monitor/insights/solution-targeting.md) allows you to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Security Center lists the workspace as not having a solution.
+- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. [Solution targeting](../azure-monitor/insights/solution-targeting.md) allows you to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Security Center lists the workspace as not having a solution.
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 08/19/2021 Last updated : 09/12/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |-||
-| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013)| August 2021|
+| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013)| September 2021|
| [Changing prefix of some alert types from "ARM_" to "VM_"](#changing-prefix-of-some-alert-types-from-arm_-to-vm_) | October 2021| | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | Q4 2021 | | [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 |
If you're looking for the latest release notes, you'll find them in the [What's
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013
-**Estimated date for change:** August 2021
+**Estimated date for change:** September 2021
The legacy implementation of ISO 27001 will be removed from Security Center's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Security Center, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions, and the current legacy ISO 27001 will soon be removed from the dashboard.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
For more information, refer to Cognito Detect Syslog Guide which can be download
| | | | **Data ingestion method** | [**Log Analytics agent - custom logs**](connect-custom-logs.md) <br><br>[Extra configuration for Alsid](#extra-configuration-for-alsid)| | **Log Analytics table(s)** | AlsidForADLog_CL |
-| **Custom log sample file:** | https://github.com/Azure/azure-quickstart-templates/blob/master/alsid-syslog-proxy/logs/AlsidForAD.log |
| **Kusto function alias:** | afad_parser | | **Kusto function URL:** | https://aka.ms/sentinel-alsidforad-parser | | **Supported by** | [Alsid](https://www.alsid.com/contact-us/) |
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
imDNS | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
## Parsers
-### Available parsers
+### Source-agnostic parsers
-The KQL functions implementing the DNS information model have the following names:
+To use the source-agnostic parsers that unify all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query:
| Name | Description | Usage instructions | | | | |
The KQL functions implementing the DNS information model have the following name
The parsers can be deployed from the [Azure Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
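For example, assuming the parsers above are already deployed in your workspace, a query can use the source-agnostic `imDns` function as if it were a table. The following is a minimal sketch; the field names follow the schema documented later on this page.

```kusto
// A minimal usage sketch, assuming the imDns source-agnostic parser is deployed.
// Counts failed lookups per client over the last day.
imDns
| where TimeGenerated > ago(1d)
| where EventSubType == 'response' and DnsResponseCodeName == 'NXDOMAIN'
| summarize FailedLookups = count() by SrcIpAddr
| top 10 by FailedLookups
```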
+### Built-in source-specific parsers
+
+Azure Sentinel provides the following built-in, product-specific DNS parsers:
+
+ - **Microsoft DNS Server**, collected using the Log Analytics Agent - ASimDnsMicrosoftOMS (regular), vimDnsMicrosoftOMS (parametrized)
+ - **Cisco Umbrella** - ASimDnsCiscoUmbrella (regular), vimDnsCiscoUmbrella (parametrized)
+ - **Infoblox NIOS** - ASimDnsInfobloxNIOS (regular), vimDnsInfobloxNIOS (parametrized)
+ - **GCP DNS** - ASimDnsGcp (regular), vimDnsGcp (parametrized)
+ - **Corelight Zeek DNS events** - ASimDnsCorelightZeek (regular), vimDnsCorelightZeek (parametrized)
+
+The parsers can be deployed from the [Azure Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
+
+### Add your own normalized parsers
+
+When implementing custom parsers for the DNS information model, name your KQL functions using the following syntax: `vimDns<vendor><Product>` for parametrized parsers and `ASimDns<vendor><Product>` for regular parsers.
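As an illustration only, a regular parser for a hypothetical product might look like the following sketch. The source table `Contoso_DNS_CL` and its columns are assumptions, and the projected fields follow the schema documented on this page.

```kusto
// A minimal sketch of a regular (non-parametrized) custom DNS parser.
// Contoso_DNS_CL and its columns (Computer, ClientIP_s, Query_s, QueryType_s,
// ResponseCode_s) are hypothetical and stand in for your actual source table.
let ASimDnsContosoDns = () {
    Contoso_DNS_CL
    | extend
        EventCount          = int(1),
        EventType           = 'lookup',
        EventSubType        = 'response',
        EventProduct        = 'Contoso DNS',
        EventVendor         = 'Contoso',
        EventSchema         = 'Dns',
        EventSchemaVersion  = '0.1.2',
        EventStartTime      = TimeGenerated,
        EventResult         = iff(ResponseCode_s == 'NOERROR', 'Success', 'Failure'),
        Dvc                 = Computer,
        SrcIpAddr           = ClientIP_s,
        DnsQuery            = Query_s,
        DnsQueryTypeName    = QueryType_s,
        DnsResponseCodeName = ResponseCode_s
};
ASimDnsContosoDns
| take 10
```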
+ ### Filtering parser parameters The `im` and `vim*` parsers support [filtering parameters](normalization-about-parsers.md#optimized-parsers). While these parameters are optional, they can improve your query performance.
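For example, a call to a parametrized parser might look like the sketch below. The parameter names shown (`starttime`, `domain_has_any`) are assumptions for illustration; verify them against the documentation for the deployed parser version.

```kusto
// A hedged sketch of calling a parametrized parser with filtering parameters.
// Parameter names are illustrative and should be checked against your parser.
imDns(starttime=ago(1d), domain_has_any=dynamic(['malicious.com']))
| summarize Queries = count() by SrcIpAddr
```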
To filter results using a parameter, you must specify the parameter in your pars
## Normalized content
-Support for the DNS ASIM schema also includes support for the following built-in analytics rules with normalized authentication parsers. While links to the Azure Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Azure Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries for the listed rules.
+Support for the DNS ASIM schema also includes support for the following built-in analytics rules with normalized DNS parsers. While links to the Azure Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Azure Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries for the listed rules.
The following built-in analytic rules now work with normalized DNS parsers: - [Excessive NXDOMAIN DNS Queries (Normalized DNS)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimDNS/imDns_ExcessiveNXDOMAINDNSQueries.yaml)
The following fields are generated by Log Analytics for each record, and you can
Event fields are common to all schemas, and describe the activity itself and the reporting device.
-| **Field** | **Class** | **Type** | **Example** | **Discussion** |
-| | | | | |
-| **EventMessage** | Optional | String | | A general message or description, either included in or generated from the record. |
-| **EventCount** | Mandatory | Integer | `1` | The number of events described by the record. <br><br>This value is used when the source supports aggregation and a single record may represent multiple events. <br><br>For other sources, it should be set to **1**. |
-| **EventStartTime** | Mandatory | Date/time | | If the source supports aggregation and the record represents multiple events, use this field to specify the time that the first event was generated. <br><br>In other cases, alias the [TimeGenerated](#timegenerated) field. |
-| **EventEndTime** | | Alias || Alias to the [TimeGenerated](#timegenerated) field. |
-| **EventType** | Mandatory | Enumerated | `lookup` | Indicate the operation reported by the record. <br><Br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). |
-| **EventSubType** | Optional | Enumerated || Either **request** or **response**. For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. |
-| **EventResult** | Mandatory | Enumerated | `Success` | One of the following values: **Success**, **Partial**, **Failure**, **NA** (Not Applicable).<br> <br>The value may be provided in the source record using different terms, which should be normalized to these values. Alternatively, the source may provide only the [EventResultDetails](#eventresultdetails) field, which should be analyzed to derive the EventResult value.<br> <br>If this record represents a request and not a response, set to **NA**. |
-| <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Alias | `NXDOMAIN` | Reason or details for the result reported in the **_EventResult_** field. Aliases the [ResponseCodeName](#responsecodename) field.|
-| **EventOriginalUid** | Optional | String | | A unique ID of the original record, if provided by the source. |
-| **EventOriginalType** | Optional | String | `lookup` | The original event type or ID, if provided by the source. |
-| <a name ="eventproduct"></a>**EventProduct** | Mandatory | String | `DNS Server` | The product generating the event. This field may not be available in the source record, in which case it should be set by the parser. |
-| **EventProductVersion** | Optional | String | `12.1` | The version of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser. |
-| **EventVendor** | Mandatory | String | `Microsoft` | The vendor of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser. |
-| **EventSchemaVersion** | Mandatory | String | `0.1.1` | The version of the schema documented here is **0.1.1**. |
-| **EventReportUrl** | Optional | String | | A URL provided in the event for a resource that provides more information about the event. |
-| <a name="dvc"></a>**Dvc** | Mandatory | String | `ContosoDc.Contoso.Azure` | A unique identifier of the device on which the event occurred. <br><br>This field may alias the [DvcId](#dvcid), [DvcHostname](#dvchostname), or [DvcIpAddr](#dvcipaddr) fields. For cloud sources, for which there is no apparent device, use the same value as the [Event Product](#eventproduct) field. |
-| <a name ="dvcipaddr"></a>**DvcIpAddr** | Recommended | IP Address | `45.21.42.12` | The IP Address of the device on which the process event occurred. |
-| <a name ="dvchostname"></a>**DvcHostname** | Recommended | Hostname | `ContosoDc.Contoso.Azure` | The hostname of the device on which the process event occurred. |
-| <a name ="dvcid"></a>**DvcId** | Optional | String || The unique ID of the device on which the process event occurred. <br><br>Example: `41502da5-21b7-48ec-81c9-baeea8d7d669` |
-| <a name=additionalfields></a>**AdditionalFields** | Optional | Dynamic | | If your source provides other information worth preserving, either keep it with the original field names or create the **AdditionalFields** dynamic field, and add to the extra information as key/value pairs. |
-| | | | | |
+| **Field** | **Class** | **Type** | **Discussion** |
+| | | | |
+| **EventMessage** | Optional | String | A general message or description, either included in or generated from the record. |
+| **EventCount** | Mandatory | Integer | The number of events described by the record. <br><br>This value is used when the source supports aggregation and a single record may represent multiple events. <br><br>For other sources, it should be set to **1**. <br><br>Example:`1`|
+| **EventStartTime** | Mandatory | Date/time | If the source supports aggregation and the record represents multiple events, use this field to specify the time that the first event was generated. <br><br>In other cases, alias the [TimeGenerated](#timegenerated) field. |
+| **EventEndTime** | Alias || Alias to the [TimeGenerated](#timegenerated) field. |
+| **EventType** | Mandatory | Enumerated | Indicate the operation reported by the record. <br><Br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `lookup`|
+| **EventSubType** | Optional | Enumerated | Either **request** or **response**. For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. |
+| **EventResult** | Mandatory | Enumerated | One of the following values: **Success**, **Partial**, **Failure**, **NA** (Not Applicable).<br> <br>The value may be provided in the source record using different terms, which should be normalized to these values. Alternatively, the source may provide only the [EventResultDetails](#eventresultdetails) field, which should be analyzed to derive the EventResult value.<br> <br>If this record represents a request and not a response, set to **NA**. <br><br>Example: `Success`|
+| <a name=eventresultdetails></a>**EventResultDetails** | Alias | | Reason or details for the result reported in the **_EventResult_** field. Aliases the [ResponseCodeName](#responsecodename) field.|
+| **EventOriginalUid** | Optional | String | A unique ID of the original record, if provided by the source. |
+| **EventOriginalType** | Optional | String | The original event type or ID, if provided by the source.<br><br>Example: `lookup` |
+| <a name ="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. This field may not be available in the source record, in which case it should be set by the parser. <br><br>Example: `DNS Server` |
+| **EventProductVersion** | Optional | String | The version of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser. <br><br>Example: `12.1` |
+| **EventVendor** | Mandatory | String | The vendor of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser.<br><br>Example: `Microsoft`|
+| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.2**. |
+| **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. |
+| **EventReportUrl** | Optional | String | A URL provided in the event for a resource that provides more information about the event. |
+| <a name="dvc"></a>**Dvc** | Mandatory | String | A unique identifier of the device on which the event occurred. <br><br>This field may alias the [DvcId](#dvcid), [DvcHostname](#dvchostname), or [DvcIpAddr](#dvcipaddr) fields. For cloud sources, for which there is no apparent device, use the same value as the [Event Product](#eventproduct) field. <br><br>Example: `ContosoDc.Contoso.Azure` |
+| <a name ="dvcipaddr"></a>**DvcIpAddr** | Recommended | IP Address | The IP Address of the device on which the process event occurred. <br><br>Example: `45.21.42.12` |
+| <a name ="dvchostname"></a>**DvcHostname** | Recommended | Hostname | The hostname of the device on which the process event occurred. <br><br>Example: `ContosoDc.Contoso.Azure` |
+| <a name ="dvcid"></a>**DvcId** | Optional | String | The unique ID of the device on which the process event occurred. <br><br>Example: `41502da5-21b7-48ec-81c9-baeea8d7d669` |
+| <a name=additionalfields></a>**AdditionalFields** | Optional | Dynamic | If your source provides other information worth preserving, either keep it with the original field names or create the **AdditionalFields** dynamic field, and add to the extra information as key/value pairs. |
+| | | | |
### DNS-specific fields The fields below are specific to DNS events. That said, many of them do have similarities in other schemas and therefore follow the same naming convention.
-| **Field** | **Class** | **Type** | **Example** | **Notes** |
-| | | | | |
-| **SrcIpAddr** | Mandatory | IP Address | `192.168.12.1 `| The IP address of the client sending the DNS request. For a recursive DNS request, this value would typically be the reporting device, and in most cases set to **127.0.0.1**. |
-| **SrcPortNumber** | Optional | Integer | `54312` | Source port of the DNS query. |
-| **DstIpAddr** | Optional | IP Address | `127.0.0.1` | The IP address of the server receiving the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to **127.0.0.1**. |
-| **DstPortNumber** | Optional | Integer | `53` | Destination Port number |
-| **IpAddr** | | Alias | | Alias for SrcIpAddr |
-| <a name=query></a>**DnsQuery** | Mandatory | FQDN | `www.malicious.com` | The domain that needs to be resolved. <br><br>**Note**: Some sources send this query in different formats. For example, in the DNS protocol itself, the query includes a dot (**.**)at the end, which must be removed.<br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and then and optionally keep the rest in the [AdditionalFields](#additionalfields) field. |
-| **Domain** | | Alias || Alias to [Query](#query). |
-| **DnsQueryType** | Optional | Integer | `28` | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml)). |
-| **DnsQueryTypeName** | Mandatory | Enumerated | `AAAA` | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value. |
-| <a name=responsename></a>**DnsResponseName** | Optional | String | | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source agnostics analytics. Therefore the information model does not require parsing and normalization, and Azure Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
-| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | `NXDOMAIN` | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. |
-| **DnsResponseCode** | Optional | Integer | `3` | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).|
-| **TransactionIdHex** | Recommended | String | | The DNS unique hex transaction ID. |
-| **NetworkProtocol** | Optional | Enumerated | `UDP` | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. |
-| **DnsQueryClass** | Optional | Integer | | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable.|
-| **DnsQueryClassName** | Optional | String | `"IN"` | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. |
-| <a name=flags></a>**DnsFlags** | Optional | List of strings | `["DR"]` | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing and normalization are not required, and Azure Sentinel uses an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response).|
-| <a name=UrlCategory></a>**UrlCategory** | | String | `Educational \\ Phishing` | A DNS event source may also look up the category of the requested Domains. The field is called **_UrlCategory_** to align with the Azure Sentinel network schema. <br><br>**_DomainCategory_** is added as an alias that's fitting to DNS. |
-| **DomainCategory** | | Alias | | Alias to [UrlCategory](#UrlCategory). |
-| **ThreatCategory** | | String | | If a DNS event source also provides DNS security, it may also evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign the domain or IP address with a Threat Category. |
-| **EventSeverity** | Optional | String | `"Informational"` | If a DNS event source also provides DNS security, it may evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign a severity based on the evaluation. |
-| **DvcAction** | Optional | String | `"Blocked"` | If a DNS event source also provides DNS security, it may take an action on the request, such as blocking it. |
-| | | | | |
+| **Field** | **Class** | **Type** | **Notes** |
+| | | | |
+| **SrcIpAddr** | Mandatory | IP Address | The IP address of the client sending the DNS request. For a recursive DNS request, this value would typically be the reporting device, and in most cases set to `127.0.0.1`.<br><br>Example: `192.168.12.1` |
+| **SrcPortNumber** | Optional | Integer | Source port of the DNS query.<br><br>Example: `54312` |
+| **DstIpAddr** | Optional | IP Address | The IP address of the server receiving the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to `127.0.0.1`.<br><br>Example: `127.0.0.1` |
+| **DstPortNumber** | Optional | Integer | Destination Port number.<br><br>Example: `53` |
+| **IpAddr** | Alias | | Alias for SrcIpAddr |
+| <a name=query></a>**DnsQuery** | Mandatory | FQDN | The domain that needs to be resolved. <br><br>**Note**: Some sources send this query in different formats. For example, in the DNS protocol itself, the query includes a dot (**.**) at the end, which must be removed (see the sketch after this table).<br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and optionally keep the rest in the [AdditionalFields](#additionalfields) field.<br><br>Example: `www.malicious.com` |
+| **Domain** | Alias | | Alias to [Query](#query). |
+| **DnsQueryType** | Optional | Integer | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml)). <br><br>Example: `28`|
+| **DnsQueryTypeName** | Mandatory | Enumerated | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value.<br><br>Example: `AAAA`|
+| <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source agnostics analytics. Therefore the information model does not require parsing and normalization, and Azure Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
+| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
+| **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`|
+| **TransactionIdHex** | Recommended | String | The DNS unique hex transaction ID. |
+| **NetworkProtocol** | Optional | Enumerated | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. <br><br>Example: `UDP`|
+| **DnsQueryClass** | Optional | Integer | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable.|
+| **DnsQueryClassName** | Optional | String | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. <br><br>Example: `IN`|
+| <a name=flags></a>**DnsFlags** | Optional | List of strings | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing and normalization are not required, and Azure Sentinel uses an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response). <br><br>Example: `["DR"]`|
+| <a name=UrlCategory></a>**UrlCategory** | Optional | String | A DNS event source may also look up the category of the requested Domains. The field is called **_UrlCategory_** to align with the Azure Sentinel network schema. <br><br>**_DomainCategory_** is added as an alias that's fitting to DNS. <br><br>Example: `Educational \\ Phishing` |
+| **DomainCategory** | Optional | Alias | Alias to [UrlCategory](#UrlCategory). |
+| **ThreatCategory** | Optional | String | If a DNS event source also provides DNS security, it may also evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign the domain or IP address with a Threat Category. |
+| **EventSeverity** | Optional | String | If a DNS event source also provides DNS security, it may evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign a severity based on the evaluation. <br><br>Example: `Informational`|
+| **DvcAction** | Optional | String | If a DNS event source also provides DNS security, it may take an action on the request, such as blocking it. <br><br>Example: `Blocked` |
+| **DnsFlagsAuthenticated** | Optional | Boolean | The DNS `AD` flag, which is related to DNSSEC, indicates in a response that all data included in the answer and authority sections of the response have been verified by the server according to the policies of that server. See [RFC 3655 Section 6.1](https://tools.ietf.org/html/rfc3655#section-6.1) for more information. |
+| **DnsFlagsAuthoritative** | Optional | Boolean | The DNS `AA` flag indicates whether the response from the server was authoritative. |
+| **DnsFlagsCheckingDisabled** | Optional | Boolean | The DNS `CD` flag, which is related to DNSSEC, indicates in a query that non-verified data is acceptable to the system sending the query. See [RFC 3655 Section 6.1](https://tools.ietf.org/html/rfc3655#section-6.1) for more information. |
+| **DnsFlagsRecursionAvailable** | Optional | Boolean | The DNS `RA` flag indicates in a response that the server supports recursive queries. |
+| **DnsFlagsRecursionDesired** | Optional | Boolean | The DNS `RD` flag indicates in a request that the client would like the server to use recursive queries. |
+| **DnsFlagsTruncates** | Optional | Boolean | The DNS `TC` flag indicates that a response was truncated because it exceeded the maximum response size. |
+| **DnsFlagsZ** | Optional | Boolean | The DNS `Z` flag is a deprecated DNS flag, which might be reported by older DNS systems. |
+| | | | |
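The following sketch illustrates the trailing-dot note in the **DnsQuery** row above; the `RawQuery` value is purely illustrative.

```kusto
// Illustrative only: strip a trailing dot from the on-the-wire query form
// before storing the value in DnsQuery.
print RawQuery = 'www.malicious.com.'
| extend DnsQuery = trim_end(@'\.', RawQuery)
```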
### Deprecated aliases
-The following fields are aliases that are currently deprecated but are maintained for backwards compatibility:
+The following fields are aliases that, although currently deprecated, are maintained for backwards compatibility. They will be removed from the schema on December 31st, 2021.
- Query (alias to DnsQuery) - QueryType (alias to DnsQueryType)
The following fields are aliases that are currently deprecated but are maintaine
- QueryClassName (alias to DnsQueryClassName) - Flags (alias to DnsFlags)
+### Added fields
+
+The following fields have been added to version 0.1.2 of the schema:
+- EventSchema - currently optional, but will become mandatory on January 1st, 2022.
+- Dedicated flag fields augmenting the combined **[Flags](#flags)** field: `DnsFlagsAuthoritative`, `DnsFlagsCheckingDisabled`, `DnsFlagsRecursionAvailable`, `DnsFlagsRecursionDesired`, `DnsFlagsTruncates`, and `DnsFlagsZ`.
+ ### Additional entities Events evolve around entities such as users, hosts, process, or files, and each entity may require several fields to describe. For example:
The fields in each dictionary in the dynamic value correspond to the fields in e
## Handling DNS flags
-Parsing and normalization are not required for flag data. Instead, store the flag data provided by the reporting device in the [Flags](#flags) field.
+Parsing and normalization are not required for flag data. Instead, store the flag data provided by the reporting device in the [Flags](#flags) field. If determining the value of individual flags is straightforward, you can also use the dedicated flag fields.
-You can also provide an extra KQL function called `_imDNS<vendor>Flags_`, which takes the unparsed response as input and returns a dynamic list, with Boolean values that represent each flag in the following order:
+You can also provide an extra KQL function called `_imDNS<vendor>Flags_`, which takes the unparsed response, or dedicated flag fields, as input and returns a dynamic list with Boolean values that represent each flag in the following order (a sketch of such a helper appears after this list):
- Authenticated (AD) - Authoritative (AA)
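A minimal sketch of such a helper is shown below, assuming the source reports its flags as a space-separated string such as `AA RD RA`. The function and input names are illustrative only; the order of the returned values follows the dedicated flag fields documented earlier on this page.

```kusto
// A minimal sketch of a vendor flags helper. The function name and the
// assumption that flags arrive as a space-separated string are illustrative.
let imDNSContosoFlags = (flags: string) {
    pack_array(
        flags has 'AD',   // Authenticated (AD)
        flags has 'AA',   // Authoritative (AA)
        flags has 'CD',   // Checking Disabled (CD)
        flags has 'RA',   // Recursion Available (RA)
        flags has 'RD',   // Recursion Desired (RD)
        flags has 'TC',   // Truncated (TC)
        flags has 'Z'     // Z
    )
};
print Flags = imDNSContosoFlags('AA RD RA')
```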
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/file-event-normalization-schema.md
When implementing custom parsers for the File Event information model, name your
Add your KQL function to the `imFileEvent` source-agnostic parser to ensure that any content using the File Event model also uses your new parser.
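As a brief usage sketch, content can then query the `imFileEvent` function as if it were a table. The event type value and field names below are assumptions based on the ASIM file event schema and should be verified against the deployed parsers.

```kusto
// A hedged usage sketch of the imFileEvent source-agnostic parser.
// 'FileCreated' and the field names are assumptions to illustrate the pattern.
imFileEvent
| where TimeGenerated > ago(1d) and EventType == 'FileCreated'
| summarize Events = count() by ActorUsername
| sort by Events desc
```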
-## Normalized content for process activity data
+## Normalized content for file activity data
-Support for the File Activity ASIM schema also includes support for the following built-in analytics rules with normalized authentication parsers. While links to the Azure Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Azure Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries for the listed rules.
+Support for the File Activity ASIM schema also includes support for the following built-in analytics rules with normalized file activity parsers. While links to the Azure Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Azure Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries for the listed rules.
- [SUNBURST and SUPERNOVA backdoor hashes (Normalized File Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimFileEvent/imFileESolarWindsSunburstSupernova.yaml)
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/network-normalization-schema.md
The network normalization schema can represent any IP network session, but is sp
The following sections provide guidance on normalizing and using the schema for the different source types. Each source type may: - Support additional fields from the auxiliary field lists: [Intermediary device fields](#Intermediary), [HTTP Session fields](#http-session-fields), and [Inspection fields](#inspection-fields). Some fields might be mandatory only in the context of the specific source type. - Allow source type specific values to common event fields such as `EventType` and `EventResult`.-- Support, in addition to the `imNetworkSession` parser, also either the `imWebSession` or `imNotable` parser, or both.
+- Support, in addition to the `imNetworkSession` parser, also either the `imWebSession` or `inNetworkNotables` parser, or both.
### Netflow log sources
The following sections provide guidance on normalizing and using the schema for
| Task | Description | | | |
-| **Normalize firewall events** | To normalize events from firewalls, map relevant events to [event](#event-fields), [network session](#network-session-fields), and [session inspection](#inspection-fields) fields. Filter those events and add them to the [imNotables](#use-parsers) source-agnostic parser. |
-| **Use Firewall Events** | Firewall events are surfaced as part of the [imNetworkSession](#use-parsers) source-agnostic parser. Relevant events, identified by the firewall inspection engines, are also surfaced as part of the [imNotables](#use-parsers) source-agnostic parsers. |
+| **Normalize firewall events** | To normalize events from firewalls, map relevant events to [event](#event-fields), [network session](#network-session-fields), and [session inspection](#inspection-fields) fields. Filter those events and add them to the [inNetworkNotables](#use-parsers) source-agnostic parser. |
+| **Use Firewall Events** | Firewall events are surfaced as part of the [imNetworkSession](#use-parsers) source-agnostic parser. Relevant events, identified by the firewall inspection engines, are also surfaced as part of the [inNetworkNotables](#use-parsers) source-agnostic parsers. |
| | | ### Intrusion Prevention Systems (IPS) log sources | Task | Description | | | |
-| **Normalize IPS events** | To normalize events from intrusion prevention systems, map [event fields](#event-fields), [network session fields](#network-session-fields), and [session inspection fields](#inspection-fields). Make sure to include your source-specific parser in both both the [imNetworkSession](#use-parsers) and [imNotables](#use-parsers) source-agnostic parsers. |
-| **Use IPS events** | IPS events are surfaced as part of the [imNetworkSession](#use-parsers) and [imNotables](#use-parsers) source-agnostic parsers. |
+| **Normalize IPS events** | To normalize events from intrusion prevention systems, map [event fields](#event-fields), [network session fields](#network-session-fields), and [session inspection fields](#inspection-fields). Make sure to include your source-specific parser in both the [imNetworkSession](#use-parsers) and [inNetworkNotables](#use-parsers) source-agnostic parsers. |
+| **Use IPS events** | IPS events are surfaced as part of the [imNetworkSession](#use-parsers) and [inNetworkNotables](#use-parsers) source-agnostic parsers. |
| | | ### Web servers
The following sections provide guidance on normalizing and using the schema for
| Task | Description | | | |
-| **Normalize Web Security Gateways Events** | To normalize events from a web server gateway, map [event fields](#event-fields), [network session fields](#network-session-fields), [HTTP session fields](#http-session-fields), [session inspection fields](#inspection-fields), and optionally the intermediary fields. <br><br>Make sure to set the `EventType` to `HTTP`, and follow HTTP session-specific guidance for the `EventResult` and `EventResultDetails` fields. <br><br>Make sure you include your source-specific parser in both the [imNetworkSession](#use-parsers) and [imWebSession](#use-parsers) source-agnostic parsers. Also, filter any events detected by the inspection engine and add them to the [imNotables](#use-parsers) source-agnostic parser. |
-| **Use Web Security Gateways Events** | Web Server events are surfaced as part of the [imNetworkSession](#use-parsers) source-agnostic parser. <br><br>- To use any HTTP-specific fields, use the [imWebSession](#use-parsers) parser.<br>- To analyze detected sessions, use the [imNotables](#use-parsers) source-agnostic parser. |
+| **Normalize Web Security Gateways Events** | To normalize events from a web server gateway, map [event fields](#event-fields), [network session fields](#network-session-fields), [HTTP session fields](#http-session-fields), [session inspection fields](#inspection-fields), and optionally the intermediary fields. <br><br>Make sure to set the `EventType` to `HTTP`, and follow HTTP session-specific guidance for the `EventResult` and `EventResultDetails` fields. <br><br>Make sure you include your source-specific parser in both the [imNetworkSession](#use-parsers) and [imWebSession](#use-parsers) source-agnostic parsers. Also, filter any events detected by the inspection engine and add them to the [inNetworkNotables](#use-parsers) source-agnostic parser. |
+| **Use Web Security Gateways Events** | Web Server events are surfaced as part of the [imNetworkSession](#use-parsers) source-agnostic parser. <br><br>- To use any HTTP-specific fields, use the [imWebSession](#use-parsers) parser.<br>- To analyze detected sessions, use the [inNetworkNotables](#use-parsers) source-agnostic parser. |
| | |
To use a source-agnostic parser that unifies all built-in parsers, and ensure th
- **imNetworkSession**, for all network sessions
- **imWebSession**, for HTTP sessions, typically reported by web servers, web proxies, and web security gateways
-- **imNotables**, for sessions detected by a detection engine, usually as suspicious. Notable events are typically reported by intrusion prevention systems, firewalls, and web security gateways.
+- **inNetworkNotables**, for sessions detected by a detection engine, usually as suspicious. Notable events are typically reported by intrusion prevention systems, firewalls, and web security gateways.
Deploy the [source-agnostic and source-specific parsers](normalization-about-parsers.md) from the [Azure Sentinel GitHub repository](https://aka.ms/AzSentinelNetworkSession).
The following fields are common to all network session activity logging:
| <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. Format and supported types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [SrcUserIdType](#srcuseridtype) field. If other IDs are available, we recommend that you normalize the field names to SrcUserSid, SrcUserUid, SrcUserAadId, SrcUserOktaId and UserAwsId, respectively.For more information, see The User entity.<br><br>Example: S-1-12 | | <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | Enumerated | The type of the ID stored in the [SrcUserId](#srcuserid) field. Supported values include: `SID`, `UIS`, `AADID`, `OktaId`, and `AWSId`. | | <a name="srcusername"></a>**SrcUsername** | Optional | String | The Source username, including domain information when available. Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `johndow@contoso.com`<br>- **Windows**: `Contoso\johndow`<br>- **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `johndow`. Use the Simple form only if domain information is not available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other IDs are available, we recommend that you normalize the field names to **SrcUserUpn**, **SrcUserWindows** and **SrcUserDn**.<br><br>For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` |
-| **User** | Alias | | Alias to [SrcUsername](#srcusername) |
| <a name="srcusernametype"></a>**SrcUsernameType** | Optional | Enumerated | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. Supported values are: `UPN`, `Windows`, `DN`, and `Simple`. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `Windows` | | **SrcUserType** | Optional | Enumerated | The type of Actor. Allowed values are:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value may be provided in the source record using different terms, which should be normalized to these values. Store the original value in the [EventOriginalSeverity](#eventoriginalseverity) field. | | **SrcOriginalUserType** | | | The original source user type, if provided by the source. |
sentinel Normalization Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-schema-v1.md
The network normalization schema is used to describe reported network events, an
For more information, see [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md).

> [!IMPORTANT]
-> This article relates to version 0.1 of the network normalization schema, which was released as a preview before ASIM was available. Version 0.2 of the network normalization schema aligns with ASIM and provides other enhancements.
+> This article relates to version 0.1 of the network normalization schema, which was released as a preview before ASIM was available. [Version 0.2](network-normalization-schema.md) of the network normalization schema aligns with ASIM and provides other enhancements.
>
> For more information, see [Differences between network normalization schema versions](#changes)
>
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/anonymous-read-access-configure.md
$location = "<location>"
New-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -Location $location `
- -SkuName Standard_GRS
+ -SkuName Standard_GRS `
    -AllowBlobPublicAccess $false

# Read the AllowBlobPublicAccess property for the newly created storage account.
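A minimal sketch of reading that property back, reusing the `$rgName` and `$accountName` variables defined above:

```azurepowershell
# Returns False when anonymous (public) blob access is disallowed on the account
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess
```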
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-explorer-acl.md
When you first start Storage Explorer, the **Microsoft Azure Storage Explorer -
Select **Add an Azure Account** and click **Sign in...**. Follow the on-screen prompts to sign in to your Azure account.
-![Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Add an Azure Account option and the Sign in button.](media/storage-quickstart-blobs-storage-explorer/connect.png)
+![Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Add an Azure Account option and the Sign in button.](media/quickstart-storage-explorer/storage-explorer-connect.png)
When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
-![Microsoft Azure Storage Explorer - Connect window](media/storage-quickstart-blobs-storage-explorer/mainpage.png)
+![Microsoft Azure Storage Explorer - Connect window](media/quickstart-storage-explorer/storage-explorer-main-page.png)
## Manage an ACL
storage Data Lake Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-explorer.md
When you first start Storage Explorer, the **Microsoft Azure Storage Explorer -
Select **Add an Azure Account** and click **Sign in...**. Follow the on-screen prompts to sign in to your Azure account.
-![Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Add an Azure Account option and the Sign in button.](media/storage-quickstart-blobs-storage-explorer/connect.png)
+![Screenshot that shows Microsoft Azure Storage Explorer, and highlights the Add an Azure Account option and the Sign in button.](media/quickstart-storage-explorer/storage-explorer-connect.png)
When it completes connecting, Azure Storage Explorer loads with the **Explorer** tab shown. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
-![Microsoft Azure Storage Explorer - Connect window](media/storage-quickstart-blobs-storage-explorer/mainpage.png)
+![Microsoft Azure Storage Explorer - Connect window](media/quickstart-storage-explorer/storage-explorer-main-page.png)
## Create a container
storage Immutable Policy Configure Version Scope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/immutable-policy-configure-version-scope.md
Previously updated : 08/31/2021 Last updated : 09/10/2021
To configure a time-based retention policy on a previous version of a blob, foll
To configure a time-based retention policy on a blob version with PowerShell, call the **Set-AzStorageBlobImmutabilityPolicy** command.
+The following example shows how to configure an unlocked policy on the current version of a blob. Remember to replace placeholders in angle brackets with your own values:
+ ```azurepowershell # Get the storage account context $ctx = (Get-AzStorageAccount `
Set-AzStorageBlobImmutabilityPolicy -Container <container> `
### [Azure CLI](#tab/azure-cli)
-N/A
+To configure a time-based retention policy on a blob version with Azure CLI, you must first install the *storage-blob-preview* extension, version 0.6.1 or later.
+
+```azurecli
+az extension add --name storage-blob-preview
+```
+
+For more information about installing Azure CLI extensions, see [How to install and manage Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+Next, call the **az storage blob immutability-policy set** command to configure the time-based retention policy. The following example shows how to configure an unlocked policy on the current version of a blob. Remember to replace placeholders in angle brackets with your own values:
+
+```azurecli
+az storage blob immutability-policy set \
+ --expiry-time 2021-09-20T08:00:00Z \
+ --policy-mode Unlocked \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <storage-account> \
+ --auth-mode login
+```
To delete the unlocked policy, select **Delete** from the **More** menu.
### [PowerShell](#tab/azure-powershell)
-To modify an unlocked time-based retention policy with PowerShell, call the **Set-AzStorageBlobImmutabilityPolicy** command on the blob version with the new date and time for the policy expiration.
+To modify an unlocked time-based retention policy with PowerShell, call the **Set-AzStorageBlobImmutabilityPolicy** command on the blob version with the new date and time for the policy expiration. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell $containerName = "<container>"
$blobVersion = $blobVersion | Remove-AzStorageBlobImmutabilityPolicy
#### [Azure CLI](#tab/azure-cli)
-N/A
+To modify an unlocked time-based retention policy with Azure CLI, call the **az storage blob immutability-policy set** command on the blob version with the new date and time for the policy expiration. Remember to replace placeholders in angle brackets with your own values:
+
+```azurecli
+az storage blob immutability-policy set \
+ --expiry-time 2021-10-01T08:00:00Z \
+ --policy-mode Unlocked \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <storage-account> \
+ --auth-mode login
+```
+
+To delete an unlocked retention policy, call the **az storage blob immutability-policy delete** command.
+
+```azurecli
+az storage blob immutability-policy delete \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <storage-account> \
+ --auth-mode login
+```
$blobVersion = $blobVersion |
### [Azure CLI](#tab/azure-cli)
-N/A
+To lock a policy with Azure CLI, call the **az storage blob immutability-policy set** command and set the `--policy-mode` parameter to *Locked*. You can also change the expiry time when you lock the policy.
+
+```azurecli
+az storage blob immutability-policy set \
+ --expiry-time 2021-10-01T08:00:00Z \
+ --policy-mode Locked \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <storage-account> \
+ --auth-mode login
+```
Set-AzStorageBlobLegalHold -Container <container> `
#### [Azure CLI](#tab/azure-cli)
-N/A
+To configure or clear a legal hold on a blob version with Azure CLI, call the **az storage blob set-legal-hold** command.
+
+```azurecli
+# Set a legal hold
+az storage blob set-legal-hold \
+ --legal-hold \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <account-name> \
+ --auth-mode login
+
+# Clear a legal hold
+az storage blob set-legal-hold \
+ --legal-hold false \
+ --container <container> \
+ --name <blob-version> \
+ --account-name <account-name> \
+ --auth-mode login
+```
storage Quickstart Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/quickstart-storage-explorer.md
+
+ Title: Quickstart - Create a blob with Azure Storage Explorer
+
+description: Learn how to use Azure Storage Explorer to create a container and a blob, download the blob to your local computer, and view all of the blobs in the container.
++++++ Last updated : 09/10/2021+++
+# Quickstart: Use Azure Storage Explorer to create a blob
+
+In this quickstart, you learn how to use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to create a container and a blob. Next, you learn how to download the blob to your local computer, and how to view all of the blobs in a container. You also learn how to create a snapshot of a blob, manage container access policies, and create a shared access signature.
+
+## Prerequisites
++
+This quickstart requires that you install Azure Storage Explorer. To install Azure Storage Explorer for Windows, Macintosh, or Linux, see [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+
+## Log in to Storage Explorer
+
+On first launch, the **Microsoft Azure Storage Explorer - Connect** window is shown. Storage Explorer provides several ways to connect to storage accounts. The following table lists the different ways you can connect:
+
+|Task|Purpose|
+|||
+|Add an Azure Account | Redirects you to your organization's sign-in page to authenticate you to Azure. |
+|Use a connection string or shared access signature URI | Directly access a container or storage account with a SAS token or a connection string. |
+|Use a storage account name and key| Use the storage account name and key of your storage account to connect to Azure storage.|
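For reference only, outside of Storage Explorer, the same three connection styles map to Azure PowerShell storage contexts. This is a hedged sketch; the account names, SAS tokens, and keys in angle brackets are placeholders:

```azurepowershell
# Azure AD sign-in (comparable to "Add an Azure Account")
$adContext  = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# Shared access signature (SAS) token
$sasContext = New-AzStorageContext -StorageAccountName "<storage-account>" -SasToken "<sas-token>"

# Storage account name and key
$keyContext = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
```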
+
+Select **Add an Azure Account** and click **Sign in...**. Follow the on-screen prompts to sign in to your Azure account.
++
+After Storage Explorer finishes connecting, it displays the **Explorer** tab. This view gives you insight into all of your Azure storage accounts as well as local storage configured through the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Cosmos DB](../../cosmos-db/storage-explorer.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) accounts, or [Azure Stack](/azure-stack/user/azure-stack-storage-connect-se?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) environments.
++
+## Create a container
+
+To create a container, expand the storage account you created in the preceding step. Select **Blob Containers**, right-click, and select **Create Blob Container**. Enter the name for your blob container. See the [Create a container](storage-quickstart-blobs-dotnet.md#create-a-container) section for a list of rules and restrictions on naming blob containers. When complete, press **Enter** to create the blob container. After the blob container is created, it appears under the **Blob Containers** folder for the selected storage account.
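If you prefer scripting the same step, a container can also be created with Azure PowerShell. This is an illustrative sketch only; the storage account and container names are placeholders:

```azurepowershell
# Create a container in the storage account referenced by the context
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
New-AzStorageContainer -Name "<container-name>" -Context $ctx
```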
+
+## Upload blobs to the container
+
+Blob storage supports block blobs, append blobs, and page blobs. VHD files used to back IaaS VMs are page blobs. Append blobs are used for logging, such as when you want to write to a file and then keep adding more information. Most files stored in Blob storage are block blobs.
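As an aside, the blob type can also be chosen explicitly when uploading with Azure PowerShell. This is a hedged sketch with placeholder paths and names:

```azurepowershell
# Upload a local file as a block blob; use -BlobType Page or -BlobType Append for the other types
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
Set-AzStorageBlobContent -File "C:\data\example.txt" `
    -Container "<container>" `
    -Blob "example.txt" `
    -BlobType Block `
    -Context $ctx
```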
+
+On the container ribbon, select **Upload**. This operation gives you the option to upload a folder or a file.
+
+Choose the files or folder to upload. Select the **blob type**. Acceptable choices are **Append**, **Page**, or **Block** blob.
+
+If uploading a .vhd or .vhdx file, choose **Upload .vhd/.vhdx files as page blobs (recommended)**.
+
+In the **Upload to folder (optional)** field, enter a folder name to store the files or folders under a folder in the container. If no folder name is entered, the files are uploaded directly under the container.
++
+When you select **OK**, the selected files are queued and uploaded one at a time. When the upload is complete, the results are shown in the **Activities** window.
+
+## View blobs in a container
+
+In the **Azure Storage Explorer** application, select a container under a storage account. The main pane shows a list of the blobs in the selected container.
++
+## Download blobs
+
+To download a blob using **Azure Storage Explorer**, select the blob, and then select **Download** from the ribbon. A file dialog opens where you can enter a file name. Select **Save** to start downloading the blob to the local location.
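The same download can be scripted with Azure PowerShell if needed; a minimal sketch with placeholder names:

```azurepowershell
# Download a blob to a local folder
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
Get-AzStorageBlobContent -Container "<container>" -Blob "<blob-name>" -Destination "C:\downloads\" -Context $ctx
```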
+
+## Manage snapshots
+
+Azure Storage Explorer lets you take and manage [snapshots](./snapshots-overview.md) of your blobs. To take a snapshot of a blob, right-click the blob and select **Create Snapshot**. To view snapshots for a blob, right-click the blob and select **Manage Snapshots**. A list of the snapshots for the blob is shown in the current tab.
++
+## Generate a shared access signature
+
+You can use Storage Explorer to generate a shared access signature (SAS). Right-click a storage account, container, or blob, and then choose **Get Shared Access Signature...**. Choose the start and expiry times and the permissions for the SAS URL, and then select **Create**. Storage Explorer generates the SAS token with the parameters you specified and displays it for copying.
++
+When you create a SAS for a storage account, Storage Explorer generates an account SAS. For more information about the account SAS, see [Create an account SAS](/rest/api/storageservices/create-account-sas).
+
+When you create a SAS for a container or blob, Storage Explorer generates a service SAS. For more information about the service SAS, see [Create a service SAS](/rest/api/storageservices/create-service-sas).
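For illustration only, a service SAS for a single blob can also be generated with Azure PowerShell, signed with the account key as described in the note below. The values in angle brackets are placeholders:

```azurepowershell
# Create a key-based context, then generate a read-only service SAS URL that expires in one hour
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
New-AzStorageBlobSASToken -Container "<container>" `
    -Blob "<blob-name>" `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(1) `
    -FullUri `
    -Context $ctx
```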
+
+> [!NOTE]
+> When you create a SAS with Storage Explorer, the SAS is always assigned with the storage account key. Storage Explorer does not currently support creating a user delegation SAS, which is a SAS that is signed with Azure AD credentials.
+
+## Next steps
+
+In this quickstart, you learned how to transfer files between a local disk and Azure Blob storage using **Azure Storage Explorer**. To learn more about working with Blob storage, continue to the Blob storage overview.
+
+> [!div class="nextstepaction"]
+> [Introduction to Azure Blob Storage](./storage-blobs-introduction.md)
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/intellij-tool-synapse.md
After creating a Scala application, you can remotely run it.
|Main class name|The default value is the main class from the selected file. You can change the class by selecting the ellipsis (**...**) and choosing another class.|
|Job configurations|You can change the default key and values. For more information, see [Apache Livy REST API](http://livy.incubator.apache.org/docs/latest/rest-api.html).|
|Command-line arguments|You can enter arguments separated by spaces for the main class if needed.|
- |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen2 cluster. For more information: [Apache Spark Configuration]https://spark.apache.org/docs/2.4.5/configuration.html#runtime-environment) and [How to upload resources to cluster](../../storage/blobs/storage-quickstart-blobs-storage-explorer.md).|
+ |Referenced Jars and Referenced Files|You can enter the paths for the referenced Jars and files if any. You can also browse files in the Azure virtual file system, which currently only supports ADLS Gen2 cluster. For more information: [Apache Spark Configuration](https://spark.apache.org/docs/2.4.5/configuration.html#runtime-environment) and [How to upload resources to cluster](../../storage/blobs/quickstart-storage-explorer.md).|
|Job Upload Storage|Expand to reveal additional options.|
|Storage Type|Select **Use Azure Blob to upload** or **Use cluster default storage account to upload** from the drop-down list.|
|Storage Account|Enter your storage account.|