Updates from: 01/12/2022 02:07:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/access-tokens.md
If the **response_type** parameter in an `/authorize` request includes `token`,
## Request a token
-To request an access token, you need an authorization code. Below is an example of a request to the `/authorize` endpoint for an authorization code. Custom domains are not supported for use with access tokens. Use your tenant-name.onmicrosoft.com domain in the request URL.
+To request an access token, you need an authorization code. Below is an example of a request to the `/authorize` endpoint for an authorization code.
In the following example, you replace these values in the query string:
-- `<tenant-name>` - The name of your Azure AD B2C tenant.
+- `<tenant-name>` - The name of your [Azure AD B2C tenant](tenant-management.md#get-your-tenant-name). If you're using a custom domain, replace `tenant-name.b2clogin.com` with your domain, such as `contoso.com`.
- `<policy-name>` - The name of your custom policy or user flow.
- `<application-ID>` - The application identifier of the web application that you registered to support the user flow.
- `<application-ID-URI>` - The application identifier URI that you set under the **Expose an API** blade of the client application.
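For orientation, here's a sketch of what such a request might look like once the placeholders are substituted. The `redirect_uri`, `nonce`, and scope name `read` are illustrative values, not from the original article:

```http
GET https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/oauth2/v2.0/authorize?
client_id=<application-ID>
&response_type=code
&redirect_uri=https%3A%2F%2Fjwt.ms
&scope=<application-ID-URI>%2Fread
&nonce=anyRandomValue
```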
active-directory-b2c Claim Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/claim-resolver-overview.md
Previously updated : 12/12/2021 Last updated : 1/11/2022
The following table lists the [OAuth2 identity provider](oauth2-technical-profil
| Claim | Description | Example |
| -- | -- | -- |
-| {oauth2:access_token} | The access token. | N/A |
-| {oauth2:refresh_token} | The refresh token. | N/A |
+| {oauth2:access_token} | The OAuth2 identity provider access token. The `access_token` attribute. | `eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1Ni...` |
+| {oauth2:token_type} | The type of the access token. The `token_type` attribute. | Bearer |
+| {oauth2:expires_in} | The length of time that the access token is valid in seconds. The `expires_in` attribute. The output claim [DataType](claimsschema.md#datatype) must be `int` or `long`. | 960000 |
+| {oauth2:refresh_token} | The OAuth2 identity provider refresh token. The `refresh_token` attribute. | `eyJraWQiOiJacW9pQlp2TW5pYVc2MUY...` |
+
+To use the OAuth2 identity provider claim resolvers, set the output claim's `PartnerClaimType` attribute to the claim resolver. The following example demonstrates how to get the external identity provider claims:
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Contoso</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Contoso-OAUTH">
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessTokenType" PartnerClaimType="{oauth2:token_type}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessTokenExpiresIn" PartnerClaimType="{oauth2:expires_in}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderRefreshToken" PartnerClaimType="{oauth2:refresh_token}" />
+ </OutputClaims>
+ ...
+ </TechnicalProfile>
+ </TechnicalProfiles>
+</ClaimsProvider>
+```
+ ## Using claim resolvers
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
Previously updated : 09/20/2021 Last updated : 01/11/2022
The **OutputClaimsTransformations** element may contain a collection of **Output
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
|ForceAuthN| No| Passes the ForceAuthN value in the SAML authentication request to determine if the external SAML IDP will be forced to prompt the user for authentication. By default, Azure AD B2C sets the ForceAuthN value to false on initial login. If the session is then reset (for example, by using `prompt=login` in OIDC), the ForceAuthN value is set to `true`. Setting the metadata item as shown below forces the value for all requests to the external IDP. Possible values: `true` or `false`.|
+|ProviderName| No| Passes the ProviderName value in the SAML authentication request.|
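A minimal sketch of the ForceAuthN metadata item mentioned above, assuming the standard `Metadata`/`Item` element syntax of SAML technical profiles (the technical profile ID here is a hypothetical placeholder):

```xml
<TechnicalProfile Id="Contoso-SAML2">
  <Metadata>
    <!-- Force the external SAML IdP to prompt for credentials on every request -->
    <Item Key="ForceAuthN">true</Item>
  </Metadata>
</TechnicalProfile>
```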
## Cryptographic keys
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Returns a date/time string representing a date to which a specified time interva
| **value** |Required | Number | The number of units you want to add. It can be positive (to get dates in the future) or negative (to get dates in the past). |
| **dateTime** |Required | DateTime | DateTime representing date to which the interval is added. |
+When passing a date string as input, use the [CDate](#cdate) function to wrap the datetime string. To get the system time in UTC, use the [Now](#now) function.
+ The **interval** string must have one of the following values:
* yyyy Year
* m Month
The **interval** string must have one of the following values:
* n Minute
* s Second
-**Example 1: Add 7 days to hire date**
+**Example 1: Generate a date value based on incoming StatusHireDate from Workday** <br>
`DateAdd("d", 7, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/23/2012 7:00:00 AM
-
-**Example 2: Get a date ten days prior to hire date**
-`DateAdd("d", -10, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/6/2012 7:00:00 AM
-
-**Example 3: Add two weeks to hire date**
-`DateAdd("ww", 2, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/30/2012 7:00:00 AM
-
-**Example 4: Add ten months to hire date**
-`DateAdd("m", 10, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 1/16/2013 7:00:00 AM
-
-**Example 5: Add two years to hire date**
-`DateAdd("yyyy", 2, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/16/2014 7:00:00 AM
+
+| Example | interval | value | dateTime (value of variable StatusHireDate) | output |
+| | | | | |
+| Add 7 days to hire date | "d" | 7 | 2012-03-16-07:00 | 3/23/2012 7:00:00 AM |
+| Get a date ten days prior to hire date | "d" | -10 | 2012-03-16-07:00 | 3/6/2012 7:00:00 AM |
+| Add two weeks to hire date | "ww" | 2 | 2012-03-16-07:00 | 3/30/2012 7:00:00 AM |
+| Add ten months to hire date | "m" | 10 | 2012-03-16-07:00 | 1/16/2013 7:00:00 AM |
| Add two years to hire date | "yyyy" | 2 | 2012-03-16-07:00 | 3/16/2014 7:00:00 AM |
+ ### DateDiff

**Function:**
This function uses the *interval* parameter to return a number that indicates th
| **date1** |Required | DateTime | DateTime representing a valid date. |
| **date2** |Required | DateTime | DateTime representing a valid date. |
+When passing a date string as input, use the [CDate](#cdate) function to wrap the datetime string. To get the system time in UTC, use the [Now](#now) function.
+ The **interval** string must have one of the following values:
* yyyy Year
* m Month
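As an illustrative sketch (not from the original article), DateDiff takes the same interval codes and `CDate`-wrapped inputs as DateAdd. Assuming the `StatusHireDate` attribute from the examples above, the number of days from hire date to the current UTC time would be:

`DateDiff("d", CDate([StatusHireDate]), Now())`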
The IIF function returns one of a set of possible values based on a specified co
| **valueIfTrue** |Required |Variable or String | If the condition evaluates to true, the returned value. |
| **valueIfFalse** |Required |Variable or String |If the condition evaluates to false, the returned value.|
-**Example:**
+The following comparison operators can be used in the *condition*:
+* Equal to (=) and not equal to (<>)
+* Greater than (>) and greater than or equal to (>=)
+* Less than (<) and less than or equal to (<=)
+
+**Example:** Set the target attribute value to the source country attribute if country="USA"; otherwise set the target attribute value to the source department attribute.
`IIF([country]="USA",[country],[department])`
+#### Known limitations and workarounds for IIF function
+* The IIF function currently does not support AND and OR logical operators.
+* To implement AND logic, use a nested IIF statement chained along the *trueValue* path.
+ Example: If country="USA" and state="CA", return value "True", else return "False".
+ `IIF([country]="USA",IIF([state]="CA","True","False"),"False")`
+* To implement OR logic, use a nested IIF statement chained along the *falseValue* path.
+ Example: If country="USA" or state="CA", return value "True", else return "False".
+ `IIF([country]="USA","True",IIF([state]="CA","True","False"))`
+* If the source attribute used within the IIF function is empty or null, the condition check fails.
+ * Unsupported IIF expression examples:
+ * `IIF([country]="","Other",[country])`
+ * `IIF(IsNullOrEmpty([country]),"Other",[country])`
+ * `IIF(IsPresent([country]),[country],"Other")`
+ * Recommended workaround: Use the [Switch](#switch) function to check for empty/null values. Example: If the country attribute is empty, set the value "Other". If it is present, pass the country attribute value to the target attribute.
+ * `Switch([country],[country],"","Other")`
+<br>
### InStr

**Function:**
active-directory Howto Mfa Reporting Datacollection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-reporting-datacollection.md
Title: Azure AD MFA user data collection - Azure Active Directory
-description: What information is used to help authenticate users by Azure AD Multi-Factor Authentication?
+ Title: Azure AD user data collection - Azure Active Directory
+description: What information is used to help authenticate users by self-service password reset and Azure AD Multi-Factor Authentication?
Previously updated : 11/21/2019 Last updated : 01/07/2021
-# Azure AD Multi-Factor Authentication user data collection
+# Azure AD user data collection for multifactor authentication and self-service password reset
-This document explains how to find user information collected by Azure Multi-Factor Authentication Server (MFA Server) and Azure AD MFA (Cloud-based) in the event you would like to remove it.
+This document explains how to find user information collected by Azure Multi-Factor Authentication Server (MFA Server), Azure AD MFA (Cloud-based), and self-service password reset (SSPR) in the event you would like to remove it.
[!INCLUDE [gdpr-hybrid-note](../../../includes/gdpr-hybrid-note.md)]
-## Information collected
+## MFA information collected
MFA Server, the NPS Extension, and the Windows Server 2016 Azure AD MFA AD FS Adapter collect and store the following information for 90 days.
Use the [Microsoft Privacy Portal](https://portal.azure.com/#blade/Microsoft_Azu
- MFA information is included in the export, which may take hours or days to complete.
-## Delete Data for Azure AD MFA
+## Delete data for Azure AD MFA
Use the [Microsoft Privacy Portal](https://portal.azure.com/#blade/Microsoft_Azure_Policy/UserPrivacyMenuBlade/Overview) to make a request for Account Close to delete all MFA cloud service information collected for this user.
- It may take up to 30 days for data to be fully removed.
+## Delete data for self-service password reset
+
+Users can add answers to security questions as part of SSPR. Security questions and answers are hashed to prevent unauthorized access. Only the hashed data is saved, so the security questions and answers can't be exported. Users can go to [My sign-ins](https://mysignins.microsoft.com/security-info) to edit or delete them. The only other information saved for SSPR is the user's email address.
+
+Global Administrators can remove data collected for any user. On the **Users** page in Azure AD, select a user and then click **Authentication methods** to remove their phone number or email address.
+ ## Next steps

[MFA Server reporting](howto-mfa-reporting.md)
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
On Windows 7, iOS, Android, and macOS Azure AD identifies the device using a cli
#### Chrome support
-For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji). This extension is required when a Conditional Access policy requires device-specific details.
+For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details.
To automatically deploy this extension to Chrome browsers, create the following registry key:
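A sketch of that key, assuming Chrome's standard `ExtensionInstallForcelist` policy mechanism; the value name `1` is an arbitrary ordinal, and the extension ID shown is that of the Windows 10 Accounts extension linked above:

```
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist
  Name: 1
  Type: REG_SZ
  Data: ppnbnpeolgkicgegkbkbjmhlideopiji;https://clients2.google.com/service/update2/crx
```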
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-policies.md
Previously updated : 10/26/2021 Last updated : 01/11/2022
The behavior of the client apps condition was updated in August 2020. If you hav
This control is used to exclude devices that are hybrid Azure AD joined, or marked as compliant in Intune. This exclusion can be done to block unmanaged devices.
-#### Filters for devices (preview)
+#### Filter for devices
This control allows targeting specific devices based on their attributes in a policy.
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
With Conditional Access, organizations can restrict access to [approved (modern
For more information about the benefits of using app protection policies, see the article [App protection policies overview](/mem/intune/apps/app-protection-policy).
-## Template deployment
+## Create a Conditional Access policy
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+The policies below are put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start, so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policies apply as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
-## Create a Conditional Access policy
+### Require approved client apps or app protection policy with mobile devices
-The following steps will help create a Conditional Access policy requiring an approved client app **or** an app protection policy. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
+The following steps will help create a Conditional Access policy requiring an approved client app **or** an app protection policy when using an iOS/iPadOS or Android device. This policy will also prevent the use of Exchange ActiveSync clients using basic authentication on mobile devices. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
-This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
This policy is put in to [Report-only mode](howto-conditional-access-insights-re
1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
1. Select **Done**.
1. Under **Cloud apps or actions**, select **All cloud apps**.
- 1. Select **Done**.
1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
1. Under **Include**, **Select device platforms**.
1. Choose **Android** and **iOS**.
This policy is put in to [Report-only mode](howto-conditional-access-insights-re
After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+### Block Exchange ActiveSync on all devices
+
+This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users and groups**
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
+ 1. Select **Done**.
+1. Under **Cloud apps or actions**, select **Select apps**.
+ 1. Select **Office 365 Exchange Online**.
+ 1. Select **Select**.
+1. Under **Conditions** > **Client apps**, set **Configure** to **Yes**.
+ 1. Uncheck all options except **Exchange ActiveSync clients**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require app protection policy**
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+ ## Next steps

[App protection policies overview](/intune/apps/app-protection-policy)
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/redirect-uris-ios.md
Given the following application registration in the Azure portal:
App1 uses redirect `msauth.com.contoso.app1://auth`.\
App2 uses `msauth.com.contoso.app2://auth`.\
-App3 uses `msauth.com.contoso.app1://auth`.
+App3 uses `msauth.com.contoso.app3://auth`.
### Migrating from ADAL to MSAL
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
The following diagrams illustrate the underlying details in issuing, renewing, a
| F | Azure AD validates the Session key signature on the PRT cookie, validates the nonce, verifies that the device is valid in the tenant, and issues an ID token for the web page and an encrypted session cookie for the browser. |

> [!NOTE]
-> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts extension) or in private mode in Mozilla Firefox v91+
+> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts or Office Online extensions) or in private mode in Mozilla Firefox v91+
## Next steps
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-connectivity.md
na Previously updated : 11/18/2021 Last updated : 01/11/2022
Of these URLs, the following table is the absolute bare minimum to be able to co
| mscrl.microsoft.com |HTTP/80 |Used to download CRL lists. |
| \*.verisign.com |HTTP/80 |Used to download CRL lists. |
| \*.entrust.net |HTTP/80 |Used to download CRL lists for MFA. |
-| \*.asazure.windows.net (Analysis Services)</br>\*.core.windows.net (Azure Storage)</br>\*.database.windows.net (SQL Server) </br>\*.graph.windows.net (Azure AD Graph)</br>\*.kusto.windows.net (Azure Data Explorer/Kusto)</br>\*.search.windows.net (search)</br>\*.servicebus.windows.net (Azure Service Bus)</br>|HTTPS/443|Used for the various Azure services|
+| \*.management.core.windows.net (Azure Storage)</br>\*.graph.windows.net (Azure AD Graph)|HTTPS/443|Used for the various Azure services|
| secure.aadcdn.microsoftonline-p.com |HTTPS/443 |Used for MFA. |
| \*.microsoftonline.com |HTTPS/443 |Used to configure your Azure AD directory and import/export data. |
| \*.crl3.digicert.com |HTTP/80 |Used to verify certificates. |
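As an illustrative check (not from the original article), you could verify outbound HTTPS reachability from the Azure AD Connect server with PowerShell; the hostnames are example endpoints matching the wildcards above:

```powershell
# Check outbound TCP 443 connectivity to example endpoints from the table above
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
Test-NetConnection -ComputerName secure.aadcdn.microsoftonline-p.com -Port 443
```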
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
The following tutorials provide detailed guidance on implementing some of the mo
The advanced approach provides a more elaborate, yet flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would use this approach for scenarios not covered by the guided configuration templates.
-Refer to the following advanced configuration guides for your integration requirements:
+Refer to the following advanced configuration tutorials for your integration requirements:
- [F5 BIG-IP in Azure deployment walk-through](f5-bigip-deployment-guide.md)
The Guided Configuration wizard, available from BIG-IP version 13.1 aims to mini
The latest version of the Guided Configuration, 16.1, now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of having to do so on a per-app basis.
-Refer to the following guided configuration guides using Easy Button templates for your integration requirements:
+Refer to the following guided configuration tutorials using Easy Button templates for your integration requirements:
- [F5 BIG-IP Easy Button for SSO to Kerberos applications](f5-big-ip-kerberos-easy-button.md)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization
* Full SSO between Azure AD and BIG-IP published services
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Configuring BIG-IP published applications with Azure AD provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization
* Full SSO between Azure AD and BIG-IP published services
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
Previously updated : 12/21/2021 Last updated : 01/10/2022

# How to use Azure Monitor workbooks for Azure Active Directory reports
You can filter the Risky Users trends by:
If you have a high number of risky users where "no action" has been taken, consider enabling a Conditional Access policy to require secure password change when a user is high risk.
+## Best practices
+
+### Query partially succeeded
+
+After running a workbook, you might see the following error: "Query partially succeeded; results may be incomplete or incorrect"
+
+This error means that your query timed out in the database layer. In this case, the query still "succeeded" from the workbook's perspective (it returned results), but the results also contained an error or warning message that some part of the query failed. To troubleshoot, review your query and reduce its scope.
+For example, you could add or rearrange a where condition to reduce the amount of data the query has to process.
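For instance, a minimal KQL sketch (the `SigninLogs` table and its columns are assumed here for illustration):

```kusto
SigninLogs
// Narrow the time range first so the database layer scans less data
| where TimeGenerated > ago(1d)
// ResultType "0" indicates success; keep failures only
| where ResultType != "0"
| summarize FailureCount = count() by AppDisplayName
```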
+ ## Next steps

* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
active-directory Asset Planner Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/asset-planner-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Asset Planner'
+description: Learn how to configure single sign-on between Azure Active Directory and Asset Planner.
+ Last updated : 12/20/2021
+# Tutorial: Azure AD SSO integration with Asset Planner
+
+In this tutorial, you'll learn how to integrate Asset Planner with Azure Active Directory (Azure AD). When you integrate Asset Planner with Azure AD, you can:
+
+* Control in Azure AD who has access to Asset Planner.
+* Enable your users to be automatically signed-in to Asset Planner with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Asset Planner single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Asset Planner supports **SP** initiated SSO.
+
+* Asset Planner supports **Just In Time** user provisioning.
+
+## Add Asset Planner from the gallery
+
+To configure the integration of Asset Planner into Azure AD, you need to add Asset Planner from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Asset Planner** in the search box.
+1. Select **Asset Planner** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Asset Planner
+
+Configure and test Azure AD SSO with Asset Planner using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Asset Planner.
+
+To configure and test Azure AD SSO with Asset Planner, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Asset Planner SSO](#configure-asset-planner-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Asset Planner test user](#create-asset-planner-test-user)** - to have a counterpart of B.Simon in Asset Planner that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Asset Planner** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type one of the following URLs:
+
+ | **Identifier** |
+ ||
+ | `https://assetplanner.com` |
+ | `https://us.assetplanner.com` |
+ | `https://staging.assetplanner.com` |
+ | `https://training.assetplanner.com` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://us.assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://staging.assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://training.assetplanner.com/saml/ap_acs/<IDPName>` |
+
+ c. In the **Sign on URL** text box, type one of the following URLs:
+
+ | **Sign on URL** |
+ |--|
+ | `https://assetplanner.com` |
+ | `https://us.assetplanner.com` |
+ | `https://staging.assetplanner.com` |
+ | `https://training.assetplanner.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Asset Planner** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Asset Planner.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Asset Planner**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Asset Planner SSO
+
+To configure single sign-on on the **Asset Planner** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Asset Planner support team](mailto:support@assetplanner.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Asset Planner test user
+
+In this section, a user called Britta Simon is created in Asset Planner. Asset Planner supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Asset Planner, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Asset Planner Sign-on URL where you can initiate the login flow.
+
+* Go to Asset Planner Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Asset Planner tile in the My Apps, this will redirect to Asset Planner Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Asset Planner you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Blue Ocean Brain Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/blue-ocean-brain-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Blue Ocean Brain'
+description: Learn how to configure single sign-on between Azure Active Directory and Blue Ocean Brain.
+ Last updated : 12/30/2021
+# Tutorial: Azure AD SSO integration with Blue Ocean Brain
+
+In this tutorial, you'll learn how to integrate Blue Ocean Brain with Azure Active Directory (Azure AD). When you integrate Blue Ocean Brain with Azure AD, you can:
+
+* Control in Azure AD who has access to Blue Ocean Brain.
+* Enable your users to be automatically signed-in to Blue Ocean Brain with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Blue Ocean Brain single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Blue Ocean Brain supports **SP and IDP** initiated SSO.
+* Blue Ocean Brain supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Blue Ocean Brain from the gallery
+
+To configure the integration of Blue Ocean Brain into Azure AD, you need to add Blue Ocean Brain from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Blue Ocean Brain** in the search box.
+1. Select **Blue Ocean Brain** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Blue Ocean Brain
+
+Configure and test Azure AD SSO with Blue Ocean Brain using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Blue Ocean Brain.
+
+To configure and test Azure AD SSO with Blue Ocean Brain, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Blue Ocean Brain SSO](#configure-blue-ocean-brain-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Blue Ocean Brain test user](#create-blue-ocean-brain-test-user)** - to have a counterpart of B.Simon in Blue Ocean Brain that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Blue Ocean Brain** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://www3.blueoceanbrain.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://www3.blueoceanbrain.com/c/<friendly id>/saml/acs`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www3.blueoceanbrain.com/c/<friendly id>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Blue Ocean Brain Client support team](mailto:support@blueoceanbrain.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Blue Ocean Brain application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Blue Ocean Brain application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | --- | --- |
+ | FirstName | user.givenname |
+ | LastName | user.surname |
+ | Email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Blue Ocean Brain.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Blue Ocean Brain**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Blue Ocean Brain SSO
+
+To configure single sign-on on the **Blue Ocean Brain** side, you need to send the **App Federation Metadata Url** to the [Blue Ocean Brain support team](mailto:support@blueoceanbrain.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Blue Ocean Brain test user
+
+In this section, a user called Britta Simon is created in Blue Ocean Brain. Blue Ocean Brain supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Blue Ocean Brain, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Blue Ocean Brain Sign on URL where you can initiate the login flow.
+
+* Go to Blue Ocean Brain Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Blue Ocean Brain for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Blue Ocean Brain tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Blue Ocean Brain for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Blue Ocean Brain you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Drtrack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/drtrack-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with DRTrack'
+description: Learn how to configure single sign-on between Azure Active Directory and DRTrack.
+ Last updated : 12/29/2021
+# Tutorial: Azure AD SSO integration with DRTrack
+
+In this tutorial, you'll learn how to integrate DRTrack with Azure Active Directory (Azure AD). When you integrate DRTrack with Azure AD, you can:
+
+* Control in Azure AD who has access to DRTrack.
+* Enable your users to be automatically signed-in to DRTrack with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DRTrack single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* DRTrack supports **SP and IDP** initiated SSO.
+
+## Add DRTrack from the gallery
+
+To configure the integration of DRTrack into Azure AD, you need to add DRTrack from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **DRTrack** in the search box.
+1. Select **DRTrack** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for DRTrack
+
+Configure and test Azure AD SSO with DRTrack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in DRTrack.
+
+To configure and test Azure AD SSO with DRTrack, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure DRTrack SSO](#configure-drtrack-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create DRTrack test user](#create-drtrack-test-user)** - to have a counterpart of B.Simon in DRTrack that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **DRTrack** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `https://<CustomerName>.appiangps.com` |
+ | `https://<CustomerName>.routetracking.com` |
+ | `https://<CustomerName>.appiantracking.com` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com` |
+ | `https://<CustomerName>.staging.appiantesting.com` |
+ | `https://<CustomerName>.qa.appiantesting` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://<CustomerName>.appiangps.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.routetracking.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.appiantracking.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com/AssertionConsumer.a` |
+
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ | `https://<CustomerName>.appiangps.com/Login.aspx` |
+ | `https://<CustomerName>.routetracking.com/Login.aspx` |
+ | `https://<CustomerName>.appiantracking.com/Login.aspx` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com/Login.aspx` |
+ | `https://<CustomerName>.staging.appiantesting` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [DRTrack Client support team](mailto:support-appian@trimblemaps.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up DRTrack** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to DRTrack.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **DRTrack**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure DRTrack SSO
+
+To configure single sign-on on the **DRTrack** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [DRTrack support team](mailto:support-appian@trimblemaps.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create DRTrack test user
+
+In this section, you create a user called Britta Simon in DRTrack. Work with [DRTrack support team](mailto:support-appian@trimblemaps.com) to add the users in the DRTrack platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to DRTrack Sign on URL where you can initiate the login flow.
+
+* Go to DRTrack Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the DRTrack for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the DRTrack tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the DRTrack for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure DRTrack you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Excelity Hcm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/excelity-hcm-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Excelity HCM'
+description: Learn how to configure single sign-on between Azure Active Directory and Excelity HCM.
+ Last updated : 12/10/2021
+# Tutorial: Azure AD SSO integration with Excelity HCM
+
+In this tutorial, you'll learn how to integrate Excelity HCM with Azure Active Directory (Azure AD). When you integrate Excelity HCM with Azure AD, you can:
+
+* Control in Azure AD who has access to Excelity HCM.
+* Enable your users to be automatically signed-in to Excelity HCM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Excelity HCM single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Excelity HCM supports **IDP** initiated SSO.
+
+## Add Excelity HCM from the gallery
+
+To configure the integration of Excelity HCM into Azure AD, you need to add Excelity HCM from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Excelity HCM** in the search box.
+1. Select **Excelity HCM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Excelity HCM
+
+Configure and test Azure AD SSO with Excelity HCM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Excelity HCM.
+
+To configure and test Azure AD SSO with Excelity HCM, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Excelity HCM SSO](#configure-excelity-hcm-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Excelity HCM test user](#create-excelity-hcm-test-user)** - to have a counterpart of B.Simon in Excelity HCM that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Excelity HCM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. You only need to save the configuration by clicking the **Save** button.
+
+1. The Excelity HCM application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Excelity HCM application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | --- | --- |
+ | country | user.country |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
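If you prefer scripting, the portal steps above can be approximated with the Azure CLI. This is a minimal sketch; the UPN domain and password are placeholders, not values from this tutorial:

```azurecli
# Sketch: create the B.Simon test user; replace the UPN domain and password.
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "B.Simon@contoso.com" \
  --password "<initial-password>"
```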
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Excelity HCM.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Excelity HCM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Excelity HCM SSO
+
+To configure single sign-on on the **Excelity HCM** side, you need to send the **App Federation Metadata Url** to the [Excelity HCM support team](mailto:HCM.Support@ceridian.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Excelity HCM test user
+
+In this section, you create a user called Britta Simon in Excelity HCM. Work with the [Excelity HCM support team](mailto:HCM.Support@ceridian.com) to add the users in the Excelity HCM platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Excelity HCM for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Excelity HCM tile in the My Apps, you should be automatically signed in to the Excelity HCM for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Excelity HCM you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Flexera One Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/flexera-one-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Flexera One'
+description: Learn how to configure single sign-on between Azure Active Directory and Flexera One.
+ Last updated : 12/29/2021
+# Tutorial: Azure AD SSO integration with Flexera One
+
+In this tutorial, you'll learn how to integrate Flexera One with Azure Active Directory (Azure AD). When you integrate Flexera One with Azure AD, you can:
+
+* Control in Azure AD who has access to Flexera One.
+* Enable your users to be automatically signed-in to Flexera One with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Flexera One single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Flexera One supports **SP and IDP** initiated SSO.
+* Flexera One supports **Just In Time** user provisioning.
+
+## Add Flexera One from the gallery
+
+To configure the integration of Flexera One into Azure AD, you need to add Flexera One from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Flexera One** in the search box.
+1. Select **Flexera One** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Flexera One
+
+Configure and test Azure AD SSO with Flexera One using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Flexera One.
+
+To configure and test Azure AD SSO with Flexera One, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Flexera One SSO](#configure-flexera-one-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Flexera One test user](#create-flexera-one-test-user)** - to have a counterpart of B.Simon in Flexera One that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Flexera One** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Flexera One Client support team](mailto:support@flexera.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Flexera One application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Flexera One application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute |
+ | ---- | ---- |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Flexera One** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Flexera One.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Flexera One**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Flexera One SSO
+
+To configure single sign-on on the **Flexera One** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Flexera One support team](mailto:support@flexera.com). They use them to configure the SAML SSO connection properly on both sides. Learn [how](https://docs.flexera.com/flexera/EN/Administration/AzureADSSO.htm).
+
+### Create Flexera One test user
+
+In this section, a user called Britta Simon is created in Flexera One. Flexera One supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Flexera One, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Flexera One Sign-on URL where you can initiate the login flow.
+
+* Go to Flexera One Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Flexera One for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Flexera One tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Flexera One for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Flexera One you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Intacct Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/intacct-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Sage Intacct | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Sage Intacct'
description: Learn how to configure single sign-on between Azure Active Directory and Sage Intacct.
Previously updated : 01/15/2021 Last updated : 01/05/2022
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Sage Intacct supports **IDP** initiated SSO
+* Sage Intacct supports **IDP** initiated SSO.
## Adding Sage Intacct from the gallery
To configure and test Azure AD SSO with Sage Intacct, complete the following ste
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 2. **[Configure Sage Intacct SSO](#configure-sage-intacct-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create Sage Intacct test user](#create-sage-intacct-test-user)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of user.
+ 1. **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of the user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works. ### Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields: In the **Reply URL** text box, add the following URLs:
- `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.)
- `https://www.p-02.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-03.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-04.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-05.intacct.com/ia/acct/sso_response.phtml`
+
+ | Reply URL |
+ | - |
+ | `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.) |
+ | `https://www.p-02.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-03.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-04.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-05.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-05.intacct.com/ia/acct/sso_response.phtml` |
1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
Follow these steps to enable Azure AD SSO in the Azure portal.
| Attribute Name | Source Attribute |
| -- | -- |
| Company Name | **Sage Intacct Company ID** |
- | name | Value should be same as the Sage Intacct **User ID**, which you enter in the **Create Sage Intacct test user section**, which is explained later in the tutorial |
+ | name | `<User ID>`|
+
+ > [!NOTE]
+ > The `<User ID>` value should be the same as the Sage Intacct **User ID**, which you enter in **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)**, explained later in this tutorial.
a. Click **Add new claim** to open the **Manage user claims** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Sage Intacct company site as an administrator.
-1. Click the **Company** tab, and then click **Company Info**.
+1. Go to **Company**, click the **Setup** tab, and click **Company** under the Configuration section.
- ![Company](./media/intacct-tutorial/ic790037.png "Company")
+ ![Company](./media/intacct-tutorial/setup.png)
1. Click the **Security** tab, and then click **Edit**.
- ![Security](./media/intacct-tutorial/ic790038.png "Security")
+ ![screenshot for Security](./media/intacct-tutorial/security.png "Security")
1. In the **Single sign on (SSO)** section, perform the following steps:
- ![Single sign on](./media/intacct-tutorial/ic790039.png "single sign on")
+ ![Single sign on](./media/intacct-tutorial/intacct-configuration.png)
a. Select **Enable single sign on**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
d. In **Login URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal. e. Open your **base-64** encoded certificate in notepad, copy the content of it into your clipboard, and then paste it to the **Certificate** box.
+
+ f. Set **Requested authentication content type** to **Exact**.
- f. Click **Save**.
+ g. Click **Save**.
-### Create Sage Intacct test user
+### Set up individual users in Intacct
-To set up Azure AD users so they can sign in to Sage Intacct, they must be provisioned into Sage Intacct. For Sage Intacct, provisioning is a manual task.
+When SSO is enabled for your company, you can individually require users to use SSO when logging in to your company. After you set up a user for SSO, the user will no longer be able to use a password to log in to your company directly. Instead, that user will need to use single sign-on and will be authenticated by your SSO identity provider as an authorized user. Any users who aren't set up for SSO can continue to log in to your company using the basic sign-in page.
-**To provision user accounts, perform the following steps:**
+**To enable SSO for a user, perform the following steps:**
1. Sign in to your **Sage Intacct** tenant.
-1. Click the **Company** tab, and then click **Users**.
-
- ![Users](./media/intacct-tutorial/ic790041.png "Users")
-
-1. Click the **Add** tab.
+1. Go to **Company**, click the **Admin** tab, then click **Users**.
- ![Add](./media/intacct-tutorial/ic790042.png "Add")
+ ![Screenshot for Users](./media/intacct-tutorial/users.png "Users")
-1. In the **User Information** section, perform the following steps:
+1. Locate the desired user and click **Edit** next to it.
- ![Screenshot shows the User Information section where you can enter the information in this step.](./media/intacct-tutorial/ic790043.png "User Information")
-
- a. Enter the **User ID**, the **Last name**, **First name**, the **Email address**, the **Title**, and the **Phone** of an Azure AD account that you want to provision into the **User Information** section.
-
- > [!NOTE]
- > Make sure that the **User ID** in above screenshot and the **Source Attribute** value which is mapped with the **name** attribute in the **User Attributes** section in the Azure portal should be same.
-
- b. Select the **Admin privileges** of an Azure AD account that you want to provision.
-
- c. Click **Save**.
-
- d. The Azure AD account holder receives an email and follows a link to confirm their account before it becomes active.
+ ![Edit the user](./media/intacct-tutorial/user-edit.png "edit")
1. Click the **Single sign-on** tab and make sure that the **Federated SSO user ID** shown in the screenshot below matches the **Source Attribute** value that is mapped to the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` claim in the **User Attributes** section in the Azure portal.
- ![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/ic790044.png "User Information")
+ ![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/user-information.png "User Information")
> [!NOTE] > To provision Azure AD user accounts, you can use other Sage Intacct user account creation tools or APIs that are provided by Sage Intacct.
active-directory Jooto Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jooto-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Jooto'
+description: Learn how to configure single sign-on between Azure Active Directory and Jooto.
+ Last updated : 12/29/2021
+# Tutorial: Azure AD SSO integration with Jooto
+
+In this tutorial, you'll learn how to integrate Jooto with Azure Active Directory (Azure AD). When you integrate Jooto with Azure AD, you can:
+
+* Control in Azure AD who has access to Jooto.
+* Enable your users to be automatically signed-in to Jooto with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Jooto single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Jooto supports **SP and IDP** initiated SSO.
+* Jooto supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Jooto from the gallery
+
+To configure the integration of Jooto into Azure AD, you need to add Jooto from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Jooto** in the search box.
+1. Select **Jooto** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Jooto
+
+Configure and test Azure AD SSO with Jooto using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Jooto.
+
+To configure and test Azure AD SSO with Jooto, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Jooto SSO](#configure-jooto-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Jooto test user](#create-jooto-test-user)** - to have a counterpart of B.Simon in Jooto that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Jooto** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, you don't have to perform any steps, as the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://app.jooto.com/`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://app.jooto.com/auth/sso/callback`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://app.jooto.com/auth/sso/callback`
+
+ d. In the **Relay State** text box, type a value using the following pattern:
+ `<ID>`
+
+1. The Jooto application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Jooto application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them per your requirements.
+
+ | Name | Source Attribute |
+ | ---- | ---- |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | email | user.mail |
+ | username | user.userprincipalname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Jooto** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Jooto.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Jooto**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Jooto SSO
+
+To configure single sign-on on the **Jooto** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Jooto support team](mailto:jooto-success@prtimes.co.jp). They use them to configure the SAML SSO connection properly on both sides.
+
+### Create Jooto test user
+
+In this section, a user called Britta Simon is created in Jooto. Jooto supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Jooto, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to the Jooto Sign-on URL where you can initiate the login flow.
+
+* Go to Jooto Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Jooto for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Jooto tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Jooto for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Jooto you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
az aks update -n aks -g myResourceGroup --enable-oidc-issuer
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
For more information on core Kubernetes and AKS concepts, see:
[aks-daemonsets]: concepts-clusters-workloads.md#daemonsets [aks-upgrade-cluster]: upgrade-cluster.md [aks-aad]: ./managed-aad.md
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-cli.md
az aks nodepool add \
The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-## Optional: Using `containerd` with Windows Server node pools (preview)
+## Optional: Using `containerd` with Windows Server node pools
-Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` as the container runtime for Windows Server 2019 node pools.
+Beginning with Kubernetes version 1.20, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. From Kubernetes 1.23, containerd will be the default container runtime for Windows.
-
-You will need the *aks-preview* Azure CLI extension version 0.5.24 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
> [!IMPORTANT] > When using `containerd` with Windows Server 2019 node pools:
az extension update --name aks-preview
> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3* which was minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, please check the list of [restricted VM sizes][restricted-vm-sizes]. > - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
-Register the `UseCustomizedWindowsContainerRuntime` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli
-az feature register --namespace "Microsoft.ContainerService" --name "UseCustomizedWindowsContainerRuntime"
-```
-
-You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/UseCustomizedWindowsContainerRuntime')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the Microsoft.ContainerService resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-### Add a Windows Server node pool with `containerd` (preview)
+### Add a Windows Server node pool with `containerd`
Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
az aks nodepool add \
The above command creates a new Windows Server node pool using `containerd` as the runtime named *npwcd* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
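For reference, a representative form of this command might look like the following sketch. The node pool name, VM size, Kubernetes version, and the `WindowsContainerRuntime=containerd` custom header are assumptions; check the current `az aks nodepool add` reference for your CLI version.

```azurecli
# Sketch: add a Windows Server node pool that uses the containerd runtime.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --os-type Windows \
    --name npwcd \
    --node-vm-size Standard_D4s_v3 \
    --kubernetes-version 1.20.5 \
    --aks-custom-headers WindowsContainerRuntime=containerd \
    --node-count 1
```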
-### Upgrade an existing Windows Server node pool to `containerd` (preview)
+### Upgrade an existing Windows Server node pool to `containerd`
Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
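A hedged sketch of the upgrade, with placeholder resource names and version:

```azurecli
# Sketch: upgrade an existing Windows Server node pool from Docker to containerd.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwd \
    --kubernetes-version 1.20.7 \
    --aks-custom-headers WindowsContainerRuntime=containerd
```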
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/add-api-manually.md
Previously updated : 04/26/2021 Last updated : 12/16/2021
# Add an API manually
-The steps in this article show how to use the Azure portal to add an API manually to the API Management (APIM) instance. A common scenario when you would want to create a blank API and define it manually is when you want to mock the API. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
+This article shows steps to add an API manually to the API Management (APIM) instance. When you want to mock an API, you can create a blank API and define it manually. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
If you want to import an existing API, see [related topics](#related-topics) section.
Complete the following quickstart: [Create an Azure API Management instance](get
1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu. 2. From the left menu, select **+ Add API**.
-3. Select **Blank API** from the list.
- ![Blank API](media/add-api-manually/blank-api.png)
+3. Select **HTTP** from the list.
+ ![Blank API](media/add-api-manually/blank-api-1.png)
4. Enter settings for the API. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial. 5. Select **Create**. At this point, you have no operations in API Management that map to the operations in your back-end API. If you call an operation that is exposed through the back end but not through the API Management, you get a **404**.
->[!NOTE]
-> By default, when you add an API, even if it is connected to some back-end service, APIM will not expose any operations until you allow them. To allow an operation of your back-end service, create an APIM operation that maps to the back-end operation.
+>[!NOTE]
+> By default, when you add an API, even if it's connected to some back-end service, APIM won't expose any operations until you allow them. To allow an operation of your back-end service, create an APIM operation that maps to the back-end operation.
## Add and test an operation
-This section shows how to add a "/get" operation in order to map it to the back end "http://httpbin.org/get" operation.
+This section shows how to add a "/get" operation to map it to the back end "http://httpbin.org/get" operation.
### Add an operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/get` in the resource. 4. Enter "*FetchData*" for **Display name**. 5. Select **Save**. ### Test an operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab. 2. Select **FetchData**.
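You can also call the operation from the command line. In this sketch, the gateway host, API path, and subscription key are placeholders:

```bash
# Sketch: call the FetchData (/get) operation through the API Management gateway.
curl "https://<apim-name>.azure-api.net/<api-path>/get" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```

The response should echo what the "http://httpbin.org/get" backend returns.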
This section shows how to add an operation that takes a parameter. In this case,
### Add the operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/status/{code}` in the resource. Optionally, you can provide some information associated with this parameter. For example, enter "*Number*" for **TYPE**, "*200*" (default) for **VALUES**. 4. Enter "GetStatus" for **Display name**. 5. Select **Save**.
-### Test the operation
+### Test the operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab. 2. Select **GetStatus**. By default the code value is set to "*200*". You can change it to test other values. For example, type "*418*".
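The equivalent command-line call passes the status code in the `{code}` template parameter. Host, path, and key are placeholders:

```bash
# Sketch: call GetStatus with the {code} template parameter set to 418.
curl -i "https://<apim-name>.azure-api.net/<api-path>/status/418" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```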
This section shows how to add a wildcard operation. A wildcard operation lets yo
### Add the operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/*` in the resource. 4. Enter "*WildcardGet*" for **Display name**. 5. Select **Save**. ### Test the operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab.
-2. Select **WildcardGet**. Try one or more of the GET operations that you tested in previous sections, or try a different supported GET operation.
+2. Select **WildcardGet**. Try the GET operations that you tested in previous sections, or try a different supported GET operation.
For example, in **Template parameters**, update the value next to the wildcard (*) name to `headers`. The operation returns the incoming request's HTTP headers. 1. Press **Send**. The response that the "http://httpbin.org/headers" operation generates appears. If you want to transform your operations, see [Transform and protect your API](transform-api.md).
->[!NOTE]
-> It can be important to understand how the host for the backend API you are integrating with handles trailing slashes on an operation URL. For more information, please see this [API Management FAQ](./api-management-faq.yml#how-does-api-management-handle-trailing-slashes-when-calling-backend-services-).
+>[!NOTE]
+> It can be important to understand how the host for the backend API you're integrating with handles trailing slashes on an operation URL. For more information, see this [API Management FAQ](./api-management-faq.yml#how-does-api-management-handle-trailing-slashes-when-calling-backend-services-).
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-api-inspector.md
documentationcenter: '' editor: ''
Previously updated : 10/30/2020 Last updated : 12/10/2021
# Tutorial: Debug your APIs using request tracing
-This tutorial describes how to inspect (trace) request processing in Azure API Management to help you debug and troubleshoot your API.
+This tutorial describes how to inspect (trace) request processing in Azure API Management. Tracing helps you debug and troubleshoot your API.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Trace an example call > * Review request processing steps ## Prerequisites
In this tutorial, you learn how to:
+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md). + Complete the following tutorial: [Import and publish your first API](import-and-publish.md).
-## Verify allow tracing setting
+## Verify allow tracing setting
The **Allow tracing** setting for the subscription used for your API must be enabled. If you're using the built-in all-access subscription, it's enabled by default. To verify in the portal, navigate to your API Management instance and select **Subscriptions**.
- :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing.png" alt-text="Allow tracing for subscription":::
+ :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing-1.png" alt-text="Allow tracing for subscription":::
## Trace a call
The **Allow tracing** setting for the subscription used for your API must be ena
1. Select **Demo Conference API** from your API list. 1. Select the **Test** tab. 1. Select the **GetSpeakers** operation.
-1. Confirm that the HTTP request header includes **Ocp-Apim-Trace: True** and a valid value for **Ocp-Apim-Subscription-Key**. If it isn't, select **+ Add Header** to add the header.
+1. Confirm that the HTTP request header includes **Ocp-Apim-Trace: True** and a valid value for **Ocp-Apim-Subscription-Key**. If it doesn't, select **+ Add header** to add the header.
1. Select **Send** to make an API call.
- :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call.png" alt-text="Configure API tracing":::
+ :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call-1.png" alt-text="Configure API tracing":::
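Alternatively, you can send the same traced request from the command line. In this sketch, the gateway host, operation path, and subscription key are placeholders:

```bash
# Sketch: request a trace for a single call; API Management returns the trace
# location in the Ocp-Apim-Trace-Location response header.
curl -v "https://<apim-name>.azure-api.net/<api-path>/<operation>" \
  -H "Ocp-Apim-Trace: true" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```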
> [!TIP]
> If **Ocp-Apim-Subscription-Key** isn't automatically populated in the HTTP request, you can retrieve it in the portal. Select **Subscriptions**, and open the context menu (**...**) for your subscription. Select **Show/hide keys**. You can also regenerate keys if needed. Then, add a key to the header.

## Review trace information
-1. After the call completes, go to the **Trace** tab in the **HTTP Response**.
+1. After the call completes, go to the **Trace** tab in the **HTTP response**.
1. Select any of the following links to jump to detailed trace info: **Inbound**, **Backend**, **Outbound**.
- :::image type="content" source="media/api-management-howto-api-inspector/response-trace.png" alt-text="Review response trace":::
+ :::image type="content" source="media/api-management-howto-api-inspector/response-trace-1.png" alt-text="Review response trace":::
- * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they will appear here.
+ * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they'll appear here.
* **Backend** - Shows the requests API Management sent to the API backend and the response it received.
The **Allow tracing** setting for the subscription used for your API must be ena
1. On the **Message** tab, the **ocp-apim-trace-location** header shows the location of the trace data stored in Azure blob storage. If needed, go to this location to retrieve the trace.
- :::image type="content" source="media/api-management-howto-api-inspector/response-message.png" alt-text="Trace location in Azure Storage":::
+ :::image type="content" source="media/api-management-howto-api-inspector/response-message-1.png" alt-text="Trace location in Azure Storage":::
## Next steps In this tutorial, you learned how to:
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-use-azure-monitor.md
Title: Tutorial - Monitor published APIs in Azure API Management | Microsoft Docs
-description: Follow the steps of this tutorial to learn how to use metrics, alerts, activity logs, and resource logs to monitor your APIs in Azure API Management.
+description: Learn how to use metrics, alerts, activity logs, and resource logs to monitor your APIs in Azure API Management.
Previously updated : 10/14/2020 Last updated : 12/09/2021
# Tutorial: Monitor published APIs
With Azure Monitor, you can visualize, query, route, archive, and take actions o
In this tutorial, you learn how to: > [!div class="checklist"]
-> * View metrics of your API
-> * Set up an alert rule
+> * View metrics of your API
+> * Set up an alert rule
> * View activity logs > * Enable and view resource logs
You can also use API Management's built-in [analytics](howto-use-analytics.md) t
API Management emits [metrics](../azure-monitor/essentials/data-platform-metrics.md) every minute, giving you near real-time visibility into the state and health of your APIs. The following are the two most frequently used metrics. For a list of all available metrics, see [supported metrics](../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice). * **Capacity** - helps you make decisions about upgrading/downgrading your APIM services. The metric is emitted per minute and reflects the gateway capacity at the time of reporting. The metric ranges from 0-100 calculated based on gateway resources such as CPU and memory utilization.
-* **Requests** - helps you analyze API traffic going through your API Management services. The metric is emitted per minute and reports the number of gateway requests with dimensions including response codes, location, hostname, and errors.
+* **Requests** - helps you analyze API traffic going through your API Management services. The metric is emitted per minute and reports the number of gateway requests with dimensions. Filter requests by response codes, location, hostname, and errors.
> [!IMPORTANT] > The following metrics have been deprecated as of May 2019 and will be retired in August 2023: Total Gateway Requests, Successful Gateway Requests, Unauthorized Gateway Requests, Failed Gateway Requests, Other Gateway Requests. Please migrate to the Requests metric which provides equivalent functionality. To access metrics:
To access metrics:
:::image type="content" source="media/api-management-howto-use-azure-monitor/api-management-metrics-blade.png" alt-text="Screenshot of Metrics item in Monitoring menu":::
-1. From the drop-down, select metrics you are interested in. For example, **Requests**.
+1. From the drop-down, select metrics you're interested in. For example, **Requests**.
1. The chart shows the total number of API calls.
-1. The chart can be filtered using the dimensions of the **Requests** metric. For example, select **Add filter**, select **Backend Response Code Category**, enter 500 as the value. Now the chart shows the number of requests that were failed in the API backend.
+You can filter the chart using the dimensions of the **Requests** metric. For example, select **Add filter**, select **Backend Response Code Category**, and enter 500 as the value. The chart then shows the number of requests that failed in the API backend.
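The same data can be queried with the Azure CLI. This is a sketch: the resource ID is a placeholder, and the dimension name `BackendResponseCodeCategory` is an assumption for the portal's **Backend Response Code Category** filter.

```azurecli
# Sketch: list the Requests metric, filtered to backend 5xx responses.
az monitor metrics list \
  --resource <apim-resource-id> \
  --metric Requests \
  --interval PT1M \
  --aggregation Total \
  --filter "BackendResponseCodeCategory eq '5xx'"
```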
-## Set up an alert rule
+## Set up an alert rule
You can receive [alerts](../azure-monitor/alerts/alerts-metric-overview.md) based on metrics and activity logs. Azure Monitor allows you to [configure an alert](../azure-monitor/alerts/alerts-metric.md) to do the following when it triggers:
To configure an example alert rule based on a request metric:
1. In **Signal name**, select **Requests**. 1. In **Split by dimensions**, in **Dimension name**, select **Gateway Response Code Category**. 1. In **Dimension values**, select **4xx**, for client errors such as unauthorized or invalid requests.
- 1. In **Alert logic**, specify a threshold after which the alert should be triggered and select **Done**.
+ 1. In **Alert logic**, specify a **Threshold value** after which the alert should be triggered.
+ 1. In **Evaluated based on**, specify **Aggregation granularity** and **Frequency of evaluation**, then select **Done**.
- :::image type="content" source="media/api-management-howto-use-azure-monitor/threshold.png" alt-text="Screenshot of Configure Signal Logic windows":::
+ :::image type="content" source="media/api-management-howto-use-azure-monitor/threshold-1.png" alt-text="Screenshot of Configure Signal Logic windows":::
1. Select an existing action group or create a new one. In the following example, a new action group is created. A notification email will be sent to admin@contoso.com. :::image type="content" source="media/api-management-howto-use-azure-monitor/action-details.png" alt-text="Screenshot of notifications for new action group":::
-1. Enter a name and description of the alert rule and select the severity level.
+1. Enter a name and description of the alert rule and select the severity level.
1. Select **Create alert rule**. 1. Now, test the alert rule by calling the Conference API without an API key. For example:
To configure an example alert rule based on a request metric:
curl -X GET https://apim-hello-world.azure-api.net/conference/speakers
```
- An alert will be triggered based on the evaluation period, and email will be sent to admin@contoso.com.
+ An alert is triggered based on the evaluation period, and an email is sent to admin@contoso.com.
Alerts also appear on the **Alerts** page for the API Management instance.
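A comparable rule can be sketched with the Azure CLI. The rule name, threshold, and the `GatewayResponseCodeCategory` dimension name are assumptions; the scope and action group IDs are placeholders.

```azurecli
# Sketch: alert when 4xx gateway responses exceed the threshold.
az monitor metrics alert create \
  --name requests-4xx-alert \
  --resource-group myResourceGroup \
  --scopes <apim-resource-id> \
  --condition "total Requests > 100 where GatewayResponseCodeCategory includes 4xx" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action <action-group-id>
```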
To configure an example alert rule based on a request metric:
## Activity logs
-Activity logs provide insight into the operations that were performed on your API Management services. Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your API Management services.
+Activity logs provide insight into the operations on your API Management services. Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your API Management services.
> [!NOTE] > Activity logs do not include read (GET) operations or operations performed in the Azure portal or using the original Management APIs.
To view the activity log:
## Resource logs
-Resource logs provide rich information about operations and errors that are important for auditing as well as troubleshooting purposes. Resource logs differ from activity logs. The activity log provides insights into the operations that were performed on your Azure resources. Resource logs provide insight into operations that your resource performed.
+Resource logs provide rich information about operations and errors that are important for auditing and troubleshooting purposes. Resource logs differ from activity logs. The activity log provides insight into the operations performed on your Azure resources. Resource logs provide insight into operations that your resource performed.
To configure resource logs:
To configure resource logs:
1. Select **+ Add diagnostic setting**. 1. Select the logs or metrics that you want to collect.
- You can archive resource logs along with metrics to a storage account, stream them to an Event Hub, or send them to a Log Analytics workspace.
+ You can archive resource logs along with metrics to a storage account, stream them to an Event Hub, or send them to a Log Analytics workspace.
For more information, see [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
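A diagnostic setting can also be created with the Azure CLI. This sketch assumes the `GatewayLogs` category and uses placeholder resource IDs:

```azurecli
# Sketch: send API Management gateway logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name apim-diagnostics \
  --resource <apim-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"category": "GatewayLogs", "enabled": true}]'
```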
In this tutorial, you learned how to:
> [!div class="checklist"] > * View metrics of your API
-> * Set up an alert rule
+> * Set up an alert rule
> * View activity logs > * Enable and view resource logs
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
To enable monitoring of the self-hosted gateway, configure the following Log Ana
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). * Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+* See guidance on how to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* See guidance on how to [run the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
* Learn more about [Azure Kubernetes Service](../aks/intro-kubernetes.md). * Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md). * Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
```console helm install azure-api-management-gateway \
- --set gateway.endpoint='<your token>' \
- --set gateway.authKey='<your configuration url>' \
+ --set gateway.endpoint='<your configuration url>' \
+ --set gateway.authKey='<your token>' \
azure-apim-gateway/azure-api-management-gateway ```
This article provides the steps for deploying self-hosted gateway component of A
* Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md). [helm]: https://helm.sh/
-[helm-install]: https://helm.sh/docs/intro/install/
+[helm-install]: https://helm.sh/docs/intro/install/
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
> Run the `kubectl logs deployment/<gateway-name>` command to view logs from a randomly selected pod if there's more than one. > Run `kubectl logs -h` for a complete set of command options, such as how to view logs for a specific pod or container.
-## Production deployment considerations
-
-### Access token
-Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
-
-When you're automating token refresh, use [this management API operation](/rest/api/apimanagement/current-ga/gateway/generate-token) to generate a new token. For information on managing Kubernetes secrets, see the [Kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret).
-
-### Namespace
-Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
-
-The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
-Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
-
-### Number of replicas
-The minimum number of replicas suitable for production is two.
-
-By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
-
-### Container resources
-By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
-
-It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
-
-- Specific hardware that the cluster is running on.
-- Presence and type of virtualization.
-- Number and rate of concurrent client connections.
-- Request rate.
-- Kind and number of configured policies.
-- Payload size and whether payloads are buffered or streamed.
-- Backend service latency.
-
-We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
-
-### Container image tag
-The YAML file provided in the Azure portal uses the **latest** tag. This tag always references the most recent version of the self-hosted gateway container image.
-
-Consider using a specific version tag in production to avoid unintentional upgrade to a newer version.
-
-You can [download a full list of available tags](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
-
-### DNS policy
-DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
-
-The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
-
-To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
-
-### External traffic policy
-The YAML file provided in the Azure portal sets `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This preserves caller IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross node load balancing, eliminating network hops caused by it. Be aware, that this setting might cause asymmetric distribution of traffic in deployments with unequal number of gateway pods per node.
-
-### Custom domain names and SSL certificates
-
-If you use custom domain names for the API Management endpoints, especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
-
-In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
-
-### Configuration backup
-To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
-
-Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be <code>/apim/config</code>. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
-To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
-
-### Local logs and metrics
-The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
-When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
-Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
-
## Next steps

* To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
* Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
+
+ Title: Self-hosted gateway on Kubernetes in production | Azure API Management
+description: Learn about guidance to run an API Management self-hosted gateway on Kubernetes for production workloads
+ Last updated : 12/17/2021
+# Guidance for running self-hosted gateway on Kubernetes in production
+
+To run the self-hosted gateway in production, there are various aspects to keep in mind. For example, it should be deployed in a highly available manner and use configuration backups to handle temporary disconnects.
+
+This article provides guidance on how to run the [self-hosted gateway](./self-hosted-gateway-overview.md) on Kubernetes for production workloads so that it runs smoothly and reliably.
+
+## Access token
+Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
+
+When you're automating token refresh, use [this management API operation](/rest/api/apimanagement/current-ga/gateway/generate-token) to generate a new token. For information on managing Kubernetes secrets, see the [Kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret).
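+
+For example, if your deployment reads the token from a Kubernetes secret, rotation amounts to regenerating the token and updating that secret. The following sketch assumes the secret name (`<gateway-name>-token`) and key (`value`) used by the portal-provided deployment files; verify them against your own deployment:
+
+```yaml
+# Sketch: replace the gateway token before the old one expires.
+# Assumes the default secret name (<gateway-name>-token) and key (value)
+# used by the portal-generated deployment files.
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <gateway-name>-token
+type: Opaque
+stringData:
+  value: "GatewayKey <new-token>"   # token returned by the generate-token operation
+```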
+
+## Namespace
+Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
+
+The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
+Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
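+
+As a minimal sketch, you can create a dedicated namespace (the name `apim-gateway` is only an example) and deploy the gateway resources into it:
+
+```yaml
+# Sketch: a dedicated namespace for the gateway; the name is an example.
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: apim-gateway
+```
+
+Then pass the namespace when deploying, for example `kubectl apply -f <gateway-name>.yaml --namespace apim-gateway`.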
+
+## Number of replicas
+The minimum number of replicas suitable for production is two.
+
+By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
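+
+For illustration, the relevant Deployment fields might look like the following sketch; the replica count and update values are examples rather than universal recommendations:
+
+```yaml
+# Sketch: explicit rolling-update settings on the gateway Deployment.
+# With three replicas, maxUnavailable: 1 keeps two pods serving during a rollout.
+spec:
+  replicas: 3
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+      maxSurge: 1
+```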
+
+## Container resources
+By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
+
+It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
+
+- Specific hardware that the cluster is running on.
+- Presence and type of virtualization.
+- Number and rate of concurrent client connections.
+- Request rate.
+- Kind and number of configured policies.
+- Payload size and whether payloads are buffered or streamed.
+- Backend service latency.
+
+We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
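+
+Expressed as container resource requests, that starting point looks like this sketch:
+
+```yaml
+# Sketch: the starting point suggested above; tune after load testing.
+resources:
+  requests:
+    cpu: "2"
+    memory: 2Gi
+```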
+
+## Container image tag
+The YAML file provided in the Azure portal uses the **latest** tag. This tag always references the most recent version of the self-hosted gateway container image.
+
+Consider using a specific version tag in production to avoid unintentional upgrade to a newer version.
+
+You can [download a full list of available tags](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
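+
+For example, a pinned image reference might look like the following sketch; the version shown is a placeholder, so substitute a real tag from that list:
+
+```yaml
+# Sketch: pin the gateway image to a specific tag instead of latest.
+# 2.0.0 is a placeholder; pick an actual tag from the tags list above.
+containers:
+- name: <gateway-name>
+  image: mcr.microsoft.com/azure-api-management/gateway:2.0.0
+```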
+
+> [!TIP]
+> When installing with Helm, image tagging is optimized for you. The Helm chart's application version pins the gateway to a given version and does not rely on `latest`.
+>
+> Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+## DNS policy
+DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
+
+The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
+
+To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
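+
+For illustration, a pod-level DNS customization might look like this sketch; the search domain is an example value only:
+
+```yaml
+# Sketch: keep the default ClusterFirst policy and append an example search domain.
+spec:
+  dnsPolicy: ClusterFirst
+  dnsConfig:
+    searches:
+      - internal.contoso.com   # example value
+```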
+
+## External traffic policy
+The YAML file provided in the Azure portal sets the `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This preserves the caller IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross-node load balancing, eliminating the network hops it causes. Be aware that this setting might cause asymmetric distribution of traffic in deployments with an unequal number of gateway pods per node.
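+
+In the Service object, the setting looks like the following sketch; the ports and selector are examples and should match your own deployment:
+
+```yaml
+# Sketch: the externalTrafficPolicy setting described above, in context.
+apiVersion: v1
+kind: Service
+metadata:
+  name: <gateway-name>
+spec:
+  type: LoadBalancer
+  externalTrafficPolicy: Local
+  ports:
+  - name: http
+    port: 80
+    targetPort: 8080   # example port mapping; match your deployment
+  selector:
+    app: <gateway-name>
+```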
+
+## Custom domain names and SSL certificates
+
+If you use custom domain names for the API Management endpoints, especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
+
+In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
+
+## Configuration backup
+
+Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the gateway can use the backup copy upon restart. The volume mount path must be <code>/apim/config</code>. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
+To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
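+
+A sketch of the mount follows; the volume type and claim name are assumptions, and the GitHub example linked above shows a complete configuration:
+
+```yaml
+# Sketch: mount a writable volume at the required /apim/config path.
+containers:
+- name: <gateway-name>
+  volumeMounts:
+  - name: config-backup
+    mountPath: /apim/config
+volumes:
+- name: config-backup
+  persistentVolumeClaim:
+    claimName: <your-pvc>   # any writable volume type can back this
+```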
+
+> [!NOTE]
+> To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
+
+## Local logs and metrics
+The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
+When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
+Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
+
+## High availability
+The self-hosted gateway is a crucial component in your infrastructure and has to be highly available. However, failures can and will happen.
+
+Consider protecting the self-hosted gateway against [disruption](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/).
+
+> [!TIP]
+> When installing with Helm, you can easily turn on highly available scheduling through the `highAvailability.enabled` configuration option.
+>
+> Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+### Protecting against node failure
+To avoid being affected by datacenter or node failures, consider using a Kubernetes cluster that uses availability zones to achieve high availability at the node level.
+
+Availability zones allow you to schedule the self-hosted gateway's pods on nodes spread across the zones (a sketch follows the note below) by using:
+- [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) (Recommended - Kubernetes v1.19+)
+- [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/)
+
+> [!Note]
+> If you are using Azure Kubernetes Service, learn how to use availability zones in [this article](./../aks/availability-zones.md).
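+
+The following sketch uses the recommended option. It assumes Kubernetes v1.19+ with nodes labeled with the standard `topology.kubernetes.io/zone` key, and the label selector is an example:
+
+```yaml
+# Sketch: spread gateway pods evenly across availability zones.
+topologySpreadConstraints:
+- maxSkew: 1
+  topologyKey: topology.kubernetes.io/zone
+  whenUnsatisfiable: DoNotSchedule
+  labelSelector:
+    matchLabels:
+      app: <gateway-name>
+```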
+
+### Protecting against pod disruption
+
+Pods can experience disruption for [various](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) reasons, such as manual pod deletion or node maintenance.
+
+Consider using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) to enforce a minimum number of pods to be available at any given time.
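+
+A minimal sketch follows; the budget name and label selector are examples:
+
+```yaml
+# Sketch: keep at least one gateway pod available during voluntary disruptions.
+apiVersion: policy/v1   # use policy/v1beta1 on clusters older than v1.21
+kind: PodDisruptionBudget
+metadata:
+  name: <gateway-name>-pdb
+spec:
+  minAvailable: 1
+  selector:
+    matchLabels:
+      app: <gateway-name>
+```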
+
+## Next steps
+
+* To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/self-hosted-gateway-overview.md
When connectivity is restored, each self-hosted gateway affected by the outage w
## Next steps
- [Read a whitepaper for additional background on this topic](https://aka.ms/hybrid-and-multi-cloud-api-management)
-- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
-- [Deploy self-hosted gateway to Kubernetes with YAML](how-to-deploy-self-hosted-gateway-kubernetes.md)
-- [Deploy self-hosted gateway to Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
-- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- Review the guidance on [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
+- Learn [how to deploy API Management self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
+- Learn [how to deploy API Management self-hosted gateway to Kubernetes with YAML](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- Learn [how to deploy API Management self-hosted gateway to Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
+- Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md)
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/visual-studio-code-tutorial.md
Title: Tutorial - Import and manage APIs - Azure API Management and Visual Studio Code | Microsoft Docs
-description: In this tutorial, learn how to use the Azure API Management Extension for Visual Studio Code to import, test, and manage APIs.
+description: Learn how to use the Azure API Management Extension for Visual Studio Code to import, test, and manage APIs.
Previously updated : 12/10/2020 Last updated : 12/27/2021+ # Tutorial: Use the API Management Extension for Visual Studio Code to import and manage APIs
You learn how to:
> * Apply API Management policies
> * Test the API
-
:::image type="content" source="media/visual-studio-code-tutorial/tutorial-api-result.png" alt-text="API in API Management Extension":::
-For an introduction to additional API Management features, see the API Management tutorials using the [Azure portal](import-and-publish.md).
+For an introduction to more API Management features, see the API Management tutorials using the [Azure portal](import-and-publish.md).
## Prerequisites
-- Understand [Azure API Management terminology](api-management-terminology.md)
-- Ensure you have installed [Visual Studio Code](https://code.visualstudio.com/) and the latest [Azure API Management Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview)
-- [Create an API Management instance](vscode-create-service-instance.md)
+
+* Understand [Azure API Management terminology](api-management-terminology.md).
+* Ensure you've installed [Visual Studio Code](https://code.visualstudio.com/) and the latest [Azure API Management Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview).
+* [Create an API Management instance](vscode-create-service-instance.md).
## Import an API
The following example imports an OpenAPI Specification in JSON format into API M
1. In Visual Studio Code, select the Azure icon from the Activity Bar. 1. In the Explorer pane, expand the API Management instance you created.
-1. Right-click **APIs**, and select **Import from OpenAPI Link**.
+1. Right-click **APIs**, and select **Import from OpenAPI Link**.
1. When prompted, enter the following values:
- 1. An **OpenAPI link** for content in JSON format. For this example: *https://conferenceapi.azurewebsites.net?format=json*.
+ 1. An **OpenAPI link** for content in JSON format. For this example: *<https://conferenceapi.azurewebsites.net?format=json>*.
    This URL is the service that implements the example API. API Management forwards requests to this address.
    1. An **API name**, such as *demo-conference-api*, that is unique in the API Management instance. This name can contain only letters, numbers, and hyphens. The first and last characters must be alphanumeric. This name is used in the path to call the API.
After the API is imported successfully, it appears in the Explorer pane, and ava
## Edit the API
-You can edit the API in Visual Studio Code. For example, edit the Resource Manager JSON description of the API in the editor window to remove the **http** protocol used to access the API. Then select **File** > **Save**.
+You can edit the API in Visual Studio Code. For example, edit the Resource Manager JSON description of the API in the editor window to remove the **http** protocol used to access the API.
:::image type="content" source="media/visual-studio-code-tutorial/import-demo-api.png" alt-text="Edit JSON description"::: To edit the OpenAPI format, right-click the API name in the Explorer pane and select **Edit OpenAPI**. Make your changes, and then select **File** > **Save**.
-## Apply policies to the API
+## Apply policies to the API
-API Management provides [policies](api-management-policies.md) you can configure for your APIs. Policies are a collection of statements that are executed sequentially on the request or response of an API. Policies can be global, which apply to all APIs in your API Management instance, or they can be scoped to a specific API or API operation.
+API Management provides [policies](api-management-policies.md) that you can configure for your APIs. Policies are a collection of statements. These statements are run sequentially on the request or response of an API. Policies can be global, which apply to all APIs in your API Management instance, or specific to a product, an API, or an API operation.
-This section shows how to apply some common outbound policies to your API that transform the API response. The policies in this example change response headers and hide original backend URLs that appear in the response body.
+This section shows how to apply common outbound policies to your API that transform the API response. The policies in this example change response headers and hide original backend URLs that appear in the response body.
-1. In the Explorer pane, select **Policy** under the *demo-conference-api* that you imported. The policy file opens in the editor window. This file configures policies for all operations in the API.
+1. In the Explorer pane, select **Policy** under the *demo-conference-api* that you imported. The policy file opens in the editor window. This file configures policies for all operations in the API.
1. Update the file with the following content in the `<outbound>` element: ```html
This section shows how to apply some common outbound policies to your API that t
* The first `set-header` policy adds a custom response header for demonstration purposes.
* The second `set-header` policy deletes the **X-Powered-By** header, if it exists. This header can reveal the application framework used in the API backend, and publishers often remove it.
* The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent links via the API Management gateway.
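
For reference, an `<outbound>` section implementing the three policies described above might look like the following sketch; the custom header name and value are placeholders:

```xml
<outbound>
    <base />
    <!-- Add a demonstration header; the name and value are placeholders. -->
    <set-header name="Custom" exists-action="override">
        <value>My custom value</value>
    </set-header>
    <!-- Remove the header that can reveal the backend framework. -->
    <set-header name="X-Powered-By" exists-action="delete" />
    <!-- Mask backend URLs in the response body. -->
    <redirect-content-urls />
</outbound>
```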
-
-1. Save the file. If you are prompted, select **Upload** to upload the file to the cloud.
+
+1. Save the file. If you're prompted, select **Upload** to upload the file to the cloud.
## Test the API ### Get the subscription key
-To test the imported API you imported and the policies that are applied, you need a subscription key for your API Management instance.
+You need a subscription key for your API Management instance to test the imported API and the policies that are applied.
1. In the Explorer pane, right-click the name of your API Management instance. 1. Select **Copy Subscription Key**.
- :::image type="content" source="media/visual-studio-code-tutorial/copy-subscription-key.png" alt-text="Copy subscription key":::
+ :::image type="content" source="media/visual-studio-code-tutorial/copy-subscription-key-1.png" alt-text="Copy subscription key":::
### Test an API operation 1. In the Explorer pane, expand the **Operations** node under the *demo-conference-api* that you imported. 1. Select an operation such as *GetSpeakers*, and then right-click the operation and select **Test Operation**. 1. In the editor window, next to **Ocp-Apim-Subscription-Key**, replace `{{SubscriptionKey}}` with the subscription key that you copied.
-1. Select **Send request**.
+1. Select **Send request**.
:::image type="content" source="media/visual-studio-code-tutorial/test-api.png" alt-text="Send API request from Visual Studio Code":::
When the request succeeds, the backend responds with **200 OK** and some data.
:::image type="content" source="media/visual-studio-code-tutorial/test-api-policies.png" alt-text="API test operation":::

Notice the following details in the response:
+
* The **Custom** header is added to the response.
* The **X-Powered-By** header doesn't appear in the response.
* URLs to the API backend are redirected to the API Management gateway, in this case `https://apim-hello-world.azure-api.net/demo-conference-api`.

### Trace the API operation
-For detailed tracing information to help you debug the API operation, select the link that appears next to **Ocp-APIM-Trace-Location**.
+For detailed tracing information to help you debug the API operation, select the link that appears next to **Ocp-APIM-Trace-Location**.
-The JSON file at that location contains Inbound, Backend, and Outbound trace information so you can determine where any problems occur after the request is made.
+The JSON file at that location contains Inbound, Backend, and Outbound trace information. The trace information helps you determine where problems occur after the request is made.
> [!TIP] > When you test API operations, the API Management Extension allows optional [policy debugging](api-management-debug-policies.md) (available in the Developer service tier).
When no longer needed, remove the API Management instance by right-clicking and
Alternatively, you can select **Delete API Management** to delete only the API Management instance (this operation doesn't delete its resource group).
-This tutorial introduced several features of the API Management Extension for Visual Studio Code that you can use to import and manage APIs. You learned how to:
+This tutorial introduced several features of the API Management Extension for Visual Studio Code. You can use these features to import and manage APIs. You learned how to:
> [!div class="checklist"] > * Import an API into API Management
This tutorial introduced several features of the API Management Extension for Vi
> * Apply API Management policies
> * Test the API
-The API Management Extension provides additional features to work with your APIs. For example, [debug polices](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
+The API Management Extension provides more features to work with your APIs. For example, you can [debug policies](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
If a version is not specified in the reference, then the app will use the latest version that exists in the key vault.
Key Vault references can be used as values for [Application Settings](configure-common.md#configure-app-settings), allowing you to keep secrets in Key Vault instead of the site config. Application Settings are securely encrypted at rest, but if you need secret management capabilities, they should go into Key Vault.
-To use a Key Vault reference for an [application setting](configure-common.md#add-or-edit), set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
+To use a Key Vault reference for an [app setting](configure-common.md#configure-app-settings), set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
> [!TIP]
> Most application settings using Key Vault references should be marked as slot settings, as you should have separate vaults for each environment.
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-common.md
Title: Configure apps in the portal
-description: Learn to configure common settings for an App Service app in the Azure portal. App settings, app config, connection strings, platform, language stack, container, etc.
+ Title: Configure apps
+description: Learn to configure common settings for an App Service app. App settings, app config, connection strings, platform, language stack, container, etc.
keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0
Last updated 12/07/2020
-# Configure an App Service app in the Azure portal
+# Configure an App Service app
-This article explains how to configure common settings for web apps, mobile back end, or API app using the [Azure portal].
+This article explains how to configure common settings for web apps, mobile back ends, or API apps.
## Configure app settings
-In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart. App setting names can't contain periods (`.`). If an app setting contains a period, the period is replaced with an underscore in the container.
-
-In the [Azure portal], search for and select **App Services**, and then select your app.
-
-![Search for App Services](./media/configure-common/search-for-app-services.png)
-
-In the app's left menu, select **Configuration** > **Application settings**.
-
-![Application Settings](./media/configure-common/open-ui.png)
+In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart.
For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in `<appSettings>` in *Web.config* or *appsettings.json*, but the values in App Service override the ones in *Web.config* or *appsettings.json*. You can keep development settings (for example, local MySQL password) in *Web.config* or *appsettings.json* and production secrets (for example, Azure MySQL database password) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure.
App settings are always encrypted when stored (encrypted-at-rest).
> [!NOTE] > App settings can also be resolved from [Key Vault](../key-vault/index.yml) using [Key Vault references](app-service-key-vault-references.md).
-### Show hidden values
+# [Azure portal](#tab/portal)
-By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click the **Value** field of that setting. To see the values of all app settings, click the **Show value** button.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
-### Add or edit
+ ![Search for App Services](./media/configure-common/search-for-app-services.png)
-To add a new app setting, click **New application setting**. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+1. In the app's left menu, select **Configuration** > **Application settings**.
-To edit a setting, click the **Edit** button on the right side.
+ ![Application Settings](./media/configure-common/open-ui.png)
-When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+ By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click its **Value** field. To see the hidden values of all app settings, click the **Show value** button.
-> [!NOTE]
-> In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore).
->
+1. To add a new app setting, click **New application setting**. To edit a setting, click the **Edit** button on the right side.
+
+1. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+
+ App setting names can't contain periods (`.`). If an app setting contains a period, the period is replaced with an underscore in the container.
+
+ > [!NOTE]
+ > In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore).
+ >
+
+1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+
+# [Azure CLI](#tab/cli)
+
+Add or edit an app setting with [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set):
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group <group-name> --settings <setting-name>="<value>"
+```
+
+Replace `<setting-name>` with the name of the setting, and `<value>` with the value to assign to it.
+
+Show all settings and their values with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list):
+
+```azurecli-interactive
+az webapp config appsettings list --name <app-name> --resource-group <group-name>
+```
+
+Remove one or more settings with [az webapp config appsettings delete](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_delete):
+
+```azurecli-interactive
+az webapp config appsettings delete --name <app-name> --resource-group <group-name> --setting-names {<setting-name1>,<setting-name2>,...}
+```
-### Edit in bulk
+# [Azure PowerShell](#tab/ps)
-To add or edit app settings in bulk, click the **Advanced edit** button. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+Set one or more app settings with [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp):
+
+```azurepowershell-interactive
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"<setting-name1>"="<value1>"; "<setting-name2>"="<value2>";...}
+```
+
+This cmdlet replaces the entire set of app settings with the ones you specify. To add or edit an app setting within an existing set, include the existing app settings in your input Hashtable by using the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) cmdlet. For example:
+
+```azurepowershell-interactive
+# Get app configuration
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Copy app settings to a new Hashtable
+$appSettings = @{}
+ForEach ($item in $webapp.SiteConfig.AppSettings) {
+$appSettings[$item.Name] = $item.Value
+}
+
+# Add or edit one or more app settings
+$appSettings['<setting-name1>'] = '<value1>'
+$appSettings['<setting-name2>'] = '<value2>'
+
+# Save changes
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings $appSettings
+```
+
+To check if an app setting is slot-specific, use [Get-AzWebAppSlotConfigName](/powershell/module/az.websites/get-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Get-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> | select AppSettingNames
+```
+
+To make one or more app settings slot-specific, use [Set-AzWebAppSlotConfigName](/powershell/module/az.websites/set-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -AppSettingNames <setting-name1>,<setting-name2>,...
+```
+
+--
+
+### Edit app settings in bulk
+
+# [Azure portal](#tab/portal)
+
+Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
App settings have the following JSON formatting:
App settings have the following JSON formatting:
] ```
-### Automate app settings with the Azure CLI
+# [Azure CLI](#tab/cli)
-You can use the Azure CLI to create and manage settings from the command line.
+Run [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) with the name of the JSON file.
-- Assign a value to a setting with [az webapp config app settings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set):
+```azurecli-interactive
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
+```
- ```azurecli-interactive
- az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings <setting-name>="<value>"
- ```
-
- Replace `<setting-name>` with the name of the setting, and `<value>` with the value to assign to it. This command creates the setting if it doesn't already exist.
-
-- Show all settings and their values with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list):
-
- ```azurecli-interactive
- az webapp config appsettings list --name <app-name> --resource-group <resource-group-name>
- ```
-
-- Remove one or more settings with [az webapp config app settings delete](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_delete):
+> [!TIP]
+> Wrapping the file name with quotes is only required in PowerShell.
+
+ The file format needed is a JSON array of settings where the slot setting field is optional. For example:
- ```azurecli-interactive
- az webapp config appsettings delete --name <app-name> --resource-group <resource-group-name> --setting-names {<names>}
- ```
+```json
+[
+ {
+ "name": "key1",
+ "slotSetting": false,
+ "value": "value1"
+ },
+ {
+ "name": "key2",
+ "value": "value2"
+ }
+]
+```
+
+For convenience, you can save existing settings into a JSON file with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list). The following example can be run in Bash.
- Replace `<names>` with a space-separated list of setting names.
+```azurecli-interactive
+# Save the settings
+az webapp config appsettings list --name <app-name> --resource-group <group-name> > settings.json
+
+# Edit the JSON file
+...
+
+# Update the app with the JSON file
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings @settings.json
+```
+
+# [Azure PowerShell](#tab/ps)
+
+It's not possible to edit app settings in bulk by using a JSON file with Azure PowerShell.
+
+--
## Configure connection strings
Connection strings are always encrypted when stored (encrypted-at-rest).
> [!NOTE] > Connection strings can also be resolved from [Key Vault](../key-vault/index.yml) using [Key Vault references](app-service-key-vault-references.md).
-### Show hidden values
+# [Azure portal](#tab/portal)
+
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
-By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, just click the **Value** field of that string. To see the values of all connection strings, click the **Show value** button.
+ ![Search for App Services](./media/configure-common/search-for-app-services.png)
-### Add or edit
+1. In the app's left menu, select **Configuration** > **Application settings**.
-To add a new connection string, click **New connection string**. In the dialog, you can [stick the connection string to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+ ![Application Settings](./media/configure-common/open-ui.png)
-To edit a setting, click the **Edit** button on the right side.
+ By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, click its **Value** field. To see the hidden values of all connection strings, click the **Show value** button.
-When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. To add a new connection string, click **New connection string**. To edit a connection string, click the **Edit** button on the right side.
-### Edit in bulk
+1. In the dialog, you can [stick the connection string to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
-To add or edit connection strings in bulk, click the **Advanced edit** button. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+
+# [Azure CLI](#tab/cli)
+
+Add or edit a connection string with [az webapp config connection-string set](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set):
+
+```azurecli-interactive
+az webapp config connection-string set --name <app-name> --resource-group <group-name> --connection-string-type <type> --settings <string-name>='<value>'
+```
+
+Replace `<string-name>` with the name of the connection string, and `<value>` with the value to assign to it. For possible values of `<type>` (for example, `SQLAzure`), see the [CLI command documentation](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set).
+
+Show all connection strings and their values with [az webapp config connection-string list](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_list):
+
+```azurecli-interactive
+az webapp config connection-string list --name <app-name> --resource-group <group-name>
+```
+
+Remove one or more connection strings with [az webapp config connection-string delete](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_delete):
+
+```azurecli-interactive
+az webapp config connection-string delete --name <app-name> --resource-group <group-name> --setting-names {<string-name1>,<string-name2>,...}
+```
+
+# [Azure PowerShell](#tab/ps)
+
+Set one or more connection strings with [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp):
+
+```azurepowershell-interactive
+$PropertiesObject = @{
+ "<string-name1>" = @{
+ value="<connection-string1>";
+ type="<type>"};
+ "<string-name2>" = @{
+ value="<connection-string2>";
+ type="<type>"}
+}
+
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -ConnectionStrings $PropertiesObject
+```
+
+Each connection string contains a name (`<string-name1>`), a value (`<connection-string1>`), and the type, which is a numerical value that corresponds to one of the [ConnectionStringType](/dotnet/api/microsoft.azure.management.websites.models.connectionstringtype) enum fields. For example, for SQL Azure, specify `type="2"`.
+
+This cmdlet replaces the entire set of connection strings with the ones you specify. To add or edit a connection string within an existing set, include the existing connection strings in your input hashtable by using the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) cmdlet. For example:
+
+```azurepowershell-interactive
+# Get app configuration
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Copy connection strings to a new hashtable
+$connStrings = @{}
+ForEach ($item in $webapp.SiteConfig.ConnectionStrings) {
+$connStrings[$item.Name] = @{value=$item.Value; type=$item.Type}
+}
+
+# Add or edit one or more connection strings
+$connStrings['<string-name1>'] = @{value='<connection-string1>'; type='<type>'}
+$connStrings['<string-name2>'] = @{value='<connection-string2>'; type='<type>'}
+
+# Save changes
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -ConnectionStrings $connStrings
+```
+
+To check if a connection string is slot-specific, use [Get-AzWebAppSlotConfigName](/powershell/module/az.websites/get-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Get-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> | select ConnectionStringNames
+```
+
+To make one or more connection strings slot-specific, use [Set-AzWebAppSlotConfigName](/powershell/module/az.websites/set-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -ConnectionStringNames <string-name1>,<string-name2>,...
+```
+
+--
+
+### Edit connection strings in bulk
+
+# [Azure portal](#tab/portal)
+
+Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
Connection strings have the following JSON formatting:
Connection strings have the following JSON formatting:
] ```
+# [Azure CLI](#tab/cli)
+
+Run [az webapp config connection-string set](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set) with the name of the JSON file.
+
+```azurecli-interactive
+az webapp config connection-string set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
+```
+
+> [!TIP]
+> Wrapping the file name with quotes is only required in PowerShell.
+
+The file format needed is a JSON array of settings where the slot setting field is optional. For example:
+
+```json
+[
+ {
+ "name": "name-1",
+ "value": "conn-string-1",
+ "type": "SQLServer",
+ "slotSetting": false
+ },
+ {
+ "name": "name-2",
+ "value": "conn-string-2",
+ "type": "PostgreSQL",
+ },
+ ...
+]
+```
+
+For convenience, you can save existing connection strings into a JSON file with [az webapp config connection-string list](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_list). The following example can be run in Bash.
+
+```azurecli-interactive
+# Save the connection strings
+az webapp config connection-string list --resource-group <group-name> --name <app-name> > settings.json
+
+# Edit the JSON file
+...
+
+# Update the app with the JSON file
+az webapp config connection-string set --resource-group <group-name> --name <app-name> --settings @settings.json
+```
+
+# [Azure PowerShell](#tab/ps)
+
+It's not possible to edit connection strings in bulk by using a JSON file with Azure PowerShell.
+
+--
+ <a name="platform"></a>+
+## Configure language stack settings
+
+- [ASP.NET Core](configure-language-dotnetcore.md)
+- [Node.js](configure-language-nodejs.md)
+- [PHP](configure-language-php.md)
+- [Python](configure-language-python.md)
+- [Java](configure-language-java.md)
+- [Ruby](configure-language-ruby.md)
+ <a name="alwayson"></a> ## Configure general settings
+# [Azure portal](#tab/portal)
+ In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **General settings**. ![General settings](./media/configure-common/open-general.png)
Here, you can configure some common settings for the app. Some settings require
- **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded. Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- - **Managed pipeline version**: The IIS [pipeline mode]. Set it to **Classic** if you have a legacy app that requires an older version of IIS.
- **HTTP version**: Set to **2.0** to enable support for the [HTTP/2](https://wikipedia.org/wiki/HTTP/2) protocol.

  > [!NOTE]
  > Most modern browsers support HTTP/2 protocol over TLS only, while non-encrypted traffic continues to use HTTP/1.1. To ensure that client browsers connect to your app with HTTP/2, secure your custom DNS name. For more information, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
Here, you can configure some common settings for the app. Some settings require
- **Debugging**: Enable remote debugging for [ASP.NET](troubleshoot-dotnet-visual-studio.md#remotedebug), [ASP.NET Core](/visualstudio/debugger/remote-debugging-azure), or [Node.js](configure-language-nodejs.md#debug-remotely) apps. This option turns off automatically after 48 hours. - **Incoming client certificates**: require client certificates in [mutual authentication](app-service-web-configure-tls-mutual-auth.md).
+# [Azure CLI](#tab/cli)
+
+You can set many of the common configurable options using [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set). The following example shows a subset of the configurable options.
+
+```azurecli-interactive
+az webapp config set --resource-group <group-name> --name <app-name> --use-32bit-worker-process [true|false] --web-sockets-enabled [true|false] --always-on [true|false] --http20-enabled [true|false] --auto-heal-enabled [true|false] --remote-debugging-enabled [true|false] --number-of-workers <number-of-workers>
+```
+
+To show the existing settings, use the [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show) command.
+
+# [Azure PowerShell](#tab/ps)
+
+You can set many of the common configurable options using [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp). The following example shows a subset of the configurable options.
+
+```azurepowershell-interactive
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -Use32BitWorkerProcess [True|False] -WebSocketsEnabled [True|False] -AlwaysOn [True|False] -NumberOfWorkers <number-of-workers>
+```
+
+To show the existing settings, use the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) command.
+
+--
## Configure default documents This setting is only for Windows apps.
-In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **Default documents**.
+The default document is the web page that's displayed at the root URL of an App Service app. The first matching file in the list is used. If the app uses modules that route based on URL instead of serving static content, there is no need for default documents.
-![Default documents](./media/configure-common/open-documents.png)
+# [Azure portal](#tab/portal)
-The default document is the web page that's displayed at the root URL for a website. The first matching file in the list is used. To add a new default document, click **New document**. Don't forget to click **Save**.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Default documents**.
-If the app uses modules that route based on URL instead of serving static content, there is no need for default documents.
+ ![Default documents](./media/configure-common/open-documents.png)
-## Configure path mappings
+1. To add a default document, click **New document**. To remove a default document, click **Delete** to its right.
-In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **Path mappings**.
+# [Azure CLI](#tab/cli)
-![Path mappings](./media/configure-common/open-path.png)
+Add a default document by using [az resource update](/cli/azure/resource#az_resource_update):
-> [!NOTE]
-> The **Path mappings** tab may display OS-specific settings that differ from the example shown here.
+```azurecli-interactive
+az resource update --resource-group <group-name> --resource-type "Microsoft.Web/sites/config" --name <app-name>/config/web --add properties.defaultDocuments <filename>
+```
-### Windows apps (uncontainerized)
+# [Azure PowerShell](#tab/ps)
-For Windows apps, you can customize the IIS handler mappings and virtual applications and directories.
+Add a default document by updating the app's configuration object in PowerShell:
-Handler mappings let you add custom script processors to handle requests for specific file extensions. To add a custom handler, click **New handler mapping**. Configure the handler as follows:
+```azurepowershell-interactive
+$webapp = Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+$webapp.SiteConfig.DefaultDocuments.Add("<filename>")
+Set-AzWebApp $webapp
+```
-- **Extension**. The file extension you want to handle, such as *\*.php* or *handler.fcgi*.
-- **Script processor**. The absolute path of the script processor. Requests to files that match the file extension are processed by the script processor. Use the path `D:\home\site\wwwroot` to refer to your app's root directory.
-- **Arguments**. Optional command-line arguments for the script processor.
+--
-<a name="redirect-to-custom-directory" aria-hidden="true"></a>
+<a name="redirect-to-a-custom-directory" aria-hidden="true"></a>
-### Redirect to a custom directory
+## Map a URL path to a directory
-> [!NOTE]
-> By default, App Service directs web requests to the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. To continue the `contoso.com` DNS example, such an app is accessible at `http://contoso.com/public`, but you typically want to direct `http://contoso.com` to the `public` directory instead.
+By default, App Service starts your app from the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. Such an app would be accessible at `http://contoso.com/public`, for example, but you typically want to direct `http://contoso.com` to the `public` directory instead. If your app's startup file is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories.
-Each app has the default root path (`/`) mapped to `D:\home\site\wwwroot`, where your code is deployed by default. If your app root is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories here.
+# [Azure portal](#tab/portal)
-From the **Path mappings** tab, click **New virtual application or directory**.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Path mappings**
+1. Click **New virtual application or directory**.
-- To map a virtual directory to a physical path, leave the **Directory** check box selected. Specify the virtual directory and the corresponding relative (physical) path to the website root (`D:\home`).-- To mark a virtual directory as a web application, clear the **Directory** check box.
-
- ![Directory check box](./media/configure-common/directory-check-box.png)
+ - To map a virtual directory to a physical path, leave the **Directory** check box selected. Specify the virtual directory and the corresponding relative (physical) path to the website root (`D:\home`).
+ - To mark a virtual directory as a web application, clear the **Directory** check box.
+
+ ![Directory check box](./media/configure-common/directory-check-box.png)
-### Containerized apps
+1. Click **OK**.
-You can [add custom storage for your containerized app](configure-connect-to-azure-storage.md). Containerized apps include all Linux apps and also the Windows and Linux custom containers running on App Service. Click **New Azure Storage Mount** and configure your custom storage as follows:
+# [Azure CLI](#tab/cli)
-- **Name**: The display name.
-- **Configuration options**: **Basic** or **Advanced**.
-- **Storage accounts**: The storage account with the container you want.
-- **Storage type**: **Azure Blobs** or **Azure Files**.
- > [!NOTE]
- > Windows custom containers only support Azure Files.
-- **Storage container**: For basic configuration, the container you want.
-- **Share name**: For advanced configuration, the file share name.
-- **Access key**: For advanced configuration, the access key.
-- **Mount path**: The absolute path in your container to mount the custom storage.
+The following example sets the root path `/` to the `public` subdirectory (which works for Laravel), and also adds a second virtual application at the `/app2` path. To run it, change `<group-name>` and `<app-name>`.
-For more information, see [Access Azure Storage as a network share from a container in App Service](configure-connect-to-azure-storage.md).
+```azurecli-interactive
+echo -n '[
+  {
+    "physicalPath": "site\\wwwroot\\public",
+    "preloadEnabled": false,
+    "virtualDirectories": null,
+    "virtualPath": "/"
+  },
+  {
+    "physicalPath": "site\\wwwroot\\app2",
+    "preloadEnabled": false,
+    "virtualDirectories": null,
+    "virtualPath": "/app2"
+  }
+]' > json.txt
-## Configure language stack settings
+json=$(cat json.txt)
-- [ASP.NET Core](configure-language-dotnetcore.md)
-- [Node.js](configure-language-nodejs.md)
-- [PHP](configure-language-php.md)
-- [Python](configure-language-python.md)
-- [Java](configure-language-java.md)
-- [Ruby](configure-language-ruby.md)
+az resource update --resource-group <group-name> --resource-type Microsoft.Web/sites/config --name <app-name>/config/web --set properties.virtualApplications="$json"
+```
+
+# [Azure PowerShell](#tab/ps)
+
+The following example sets the root path `/` to the `public` subdirectory (which works for Laravel), and also adds a second virtual application at the `/app2` path. To run it, change `<group-name>` and `<app-name>`.
+
+```azurepowershell-interactive
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Set default / path to public subdirectory
+$webapp.SiteConfig.VirtualApplications[0].PhysicalPath = "site\wwwroot\public"
+
+# Add a virtual application
+$virtualApp = New-Object Microsoft.Azure.Management.WebSites.Models.VirtualApplication
+$virtualApp.VirtualPath = "/app2"
+$virtualApp.PhysicalPath = "site\wwwroot\app2"
+$virtualApp.PreloadEnabled = $false
+$webapp.SiteConfig.VirtualApplications.Add($virtualApp)
+
+# Save settings
+Set-AzWebApp $webapp
+```
+
+---
+
+## Configure handler mappings
+
+For Windows apps, you can customize the IIS handler mappings, as well as virtual applications and directories. Handler mappings let you add custom script processors to handle requests for specific file extensions.
+
+To add a custom handler:
+
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Path mappings**.
+
+ ![Path mappings](./media/configure-common/open-path.png)
+
+1. Click **New handler mapping**. Configure the handler as follows:
+
+ - **Extension**. The file extension you want to handle, such as *\*.php* or *handler.fcgi*.
+   - **Script processor**. The absolute path of the script processor. Requests to files that match the file extension are processed by the script processor. Use the path `D:\home\site\wwwroot` to refer to your app's root directory.
+ - **Arguments**. Optional command-line arguments for the script processor.
+
+1. Click **OK**.
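
If you need to automate this, handler mappings live on the app's `Microsoft.Web/sites/config` resource, so the same `az resource update` pattern shown earlier for virtual applications applies. A minimal sketch, assuming the `properties.handlerMappings` property shape below and an illustrative FastCGI processor path (verify both against your app before relying on them):

```azurecli-interactive
# Set a custom handler mapping for *.php on the app's web config.
# <group-name> and <app-name> are placeholders for your resource group and app.
az resource update --resource-group <group-name> \
  --resource-type Microsoft.Web/sites/config --name <app-name>/config/web \
  --set properties.handlerMappings='[{"extension":"*.php","scriptProcessor":"D:\\home\\site\\wwwroot\\bin\\php-cgi.exe","arguments":""}]'
```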
## Configure custom containers
-See [Configure a custom Linux container for Azure App Service](configure-custom-container.md)
+- [Configure a custom container for Azure App Service](configure-custom-container.md)
+- [Add custom storage for your containerized app](configure-connect-to-azure-storage.md)
## Next steps
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
# Troubleshooting intermittent outbound connection errors in Azure App Service
-This article helps you troubleshoot intermittent connection errors and related performance issues in [Azure App Service](./overview.md). This topic will provide more information on, and troubleshooting methodologies for, exhaustion of source address network translation (SNAT) ports. If you require more help at any point in this article, contact the Azure experts at the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+This article helps you troubleshoot intermittent connection errors and related performance issues in [Azure App Service](./overview.md). This topic will provide more information on, and troubleshooting methodologies for, exhaustion of source network address translation (SNAT) ports. If you require more help at any point in this article, contact the Azure experts at the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
## Symptoms
If SNAT ports are exhausted, where WebJobs are unable to connect to SQL Database
## Additional information

* [SNAT with App Service](https://4lowtherabbit.github.io/blogs/2019/10/SNAT/)
-* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
+* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-ilb-arm.md
Title: Use with Internal Load Balancer - Azure Application Gateway
-description: This page provides instructions to create, configure, start, and delete an Azure application gateway with internal load balancer (ILB) for Azure Resource Manager
+description: This article provides instructions to create, configure, start, and delete an Azure application gateway with internal load balancer (ILB)
Previously updated : 11/13/2019 Last updated : 01/11/2022

# Create an application gateway with an internal load balancer (ILB)
-Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that is not exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet but still require round-robin load distribution, session stickiness, or Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination.
+Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that is not exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet but still require round-robin load distribution, session stickiness, or Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination.
-This article walks you through the steps to configure an application gateway with an ILB.
+This article walks you through the steps to configure a Standard v1 Application Gateway with an ILB.
## Before you begin
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Title: Configure an internal load balancer (ILB) endpoint
-description: This article provides information on how to configure Application Gateway with a private frontend IP address
+description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address
Previously updated : 02/23/2021 Last updated : 01/11/2022

# Configure an application gateway with an internal load balancer (ILB) endpoint
-Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
+Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
Configuring the gateway using a frontend private IP address is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers within a multi-tier application that are in a security boundary that isn't exposed to the Internet but:
Configuring the gateway using a frontend private IP address is useful for intern
- session stickiness
- or Transport Layer Security (TLS) termination (previously known as Secure Sockets Layer (SSL)).
-This article guides you through the steps to configure an application gateway with a frontend private IP address using the Azure portal.
+This article guides you through the steps to configure a Standard v1 Application Gateway with an ILB using the Azure portal.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Let's get started:
1. Select **Review + Create**.
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
+ :::image type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
1. Azure will run a quick validation check. After a few seconds, you should see a green banner that says **Validation Passed**.
Let's get started:
1. After you select create, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says, **Your deployment is complete**.

+
## Get Endpoint URL and API keys

1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-three.gif" alt-text="Gif showing the validation process of creating Form Recognizer resource.":::
-
1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
1. If your overview page does not have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Previously updated : 01/06/2022 Last updated : 01/11/2022 recommendations: false #Customer intent: As a form-processing software developer, I want to learn how to use the Form Recognizer service with Logic Apps.
applied-ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/security-how-to-update-role-assignment.md
If you do NOT do this, nothing will break. The old role will continue to functio
Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
-## Call to Action
+## Call to action
If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
Steps to check your virtual network configuration:
### Private endpoint configuration
-Steps to check your private endpoint configurtation:
+Steps to check your private endpoint configuration:
1. The `Public Network Access` flag is disabled by default when a private endpoint is created. Ensure that you have set `Public Network Access` correctly. With your cache open in the Azure portal, look under **Private Endpoint** in the Resource menu on the left for this setting.
1. If you're trying to connect to your cache's private endpoint from outside your cache's virtual network, `Public Network Access` needs to be enabled.
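
If you need to change this flag from a script rather than the portal, a minimal sketch using the Azure CLI's generic update arguments (the `publicNetworkAccess` property name is an assumption to verify against your cache's API version):

```azurecli-interactive
# Enable public network access so clients outside the virtual network
# can reach the cache alongside the private endpoint.
az redis update --name <cache-name> --resource-group <group-name> \
  --set publicNetworkAccess="Enabled"
```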
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
It is strongly recommended to update to GA+ versions listed below instead of usi
| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 |
| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> |
| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
-| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0 |
+| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0<sup>3</sup> |
-<sup>1</sup> Do not use AMA Linux version 1.10.7.0
-<sup>2</sup> Known regression where it's not working on Arc-enabled servers
+<sup>1</sup> Do not use AMA Linux version 1.10.7.0
+<sup>2</sup> Known regression where it's not working on Arc-enabled servers
+<sup>3</sup> A bug was identified wherein Linux performance counters data stops flowing when the machine is restarted or rebooted. A fix is underway and will be available in the next monthly version update.
## Install with Azure portal
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
By default, two data types - `Usage` and `AzureActivity` - are retained for a mi
Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
-The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types. Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
+> [!TIP]
+> The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. **To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types.** Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
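
As a concrete example of reducing the retention that actually drives the bill, here's a minimal sketch that lowers the workspace-level retention with the Azure CLI (resource names are placeholders):

```azurecli-interactive
# Reduce the workspace-wide data retention period to 30 days.
az monitor log-analytics workspace update \
  --resource-group <group-name> --workspace-name <workspace-name> \
  --retention-time 30
```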
### Retention by data type
azure-netapp-files Application Volume Group Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/application-volume-group-considerations.md
na Previously updated : 11/19/2021 Last updated : 01/11/2022 # Requirements and considerations for application volume group for SAP HANA
This article describes the requirements and considerations you need to be aware
To deploy SAP HANA volumes using the application volume group, you need to use your HANA database VMs as an anchor for a proximity placement group (PPG). It's recommended that you create an availability set per database and use the **[SAP HANA VM pinning request form](https://aka.ms/HANAPINNING)** to pin the availability set to a dedicated compute cluster. After pinning, you need to add a PPG to the availability set and then deploy all hosts of an SAP HANA database using that availability set. Doing so ensures that all virtual machines are at the same location. If the virtual machines are started, the PPG has its anchor.
-> [!NOTE]
-> The PPG must be in the same resource group as the capacity pool you want to use for the SAP HANA volumes.
+> [!IMPORTANT]
+> If you have requested Azure NetApp Files SAP HANA volume pinning before the application volume group was available, you should remove the pinning for your subscription. Existing pinning for a subscription might impact the application volume group deployment and might result in a failure.
When using a PPG without a pinned availability set, a PPG would lose its anchor if all the virtual machines in that PPG are stopped. When the virtual machines are restarted, they might be started in a different location, which can result in a latency increase because the volumes created with the application volume group will not be moved.
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
You must have registered your subscription for using the NetApp Resource Provide
The NetApp account you created now appears in the Azure NetApp Files blade.

> [!NOTE]
-> If you have not been granted access to the Azure NetApp Files service, you will receive the following error when you try to create the first NetApp account:
+> If you haven't registered your subscription for using the NetApp Resource Provider, you will receive the following error when you try to create the first NetApp account:
>
-> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '2017-08-15'.\"\r\n }\r\n}"}]}`
+> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '20xx-xx-xx'.\"\r\n }\r\n}"}]}`
## Next steps
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
Title: Use a custom markdown tile on Azure dashboards description: Learn how to add a markdown tile to an Azure dashboard to display static content Previously updated : 03/19/2021 Last updated : 01/11/2022
You can add a markdown tile to your Azure dashboards to display custom, static c
## Add a markdown tile to your dashboard
-1. Select **Dashboard** from the Azure portal sidebar.
+1. Select **Dashboard** from the Azure portal menu.
- ![Screenshot showing portal sidebar](./media/azure-portal-markdown-tile/azure-portal-nav.png)
1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.
You can add a markdown tile to your Azure dashboards to display custom, static c
![Screenshot showing entering URL](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-url.png)

> [!NOTE]
- > For added security, you can create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md), then point to the file using the URL option. The markdown content is encrypted through the storage account's encryption options. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
+ > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](/azure/storage/common/customer-managed-keys-configure-key-vault?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
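
   If the portal can't load the blob, a CORS rule like the following may be what's missing; a minimal sketch with the Azure CLI, assuming key-based access to the storage account:

   ```azurecli-interactive
   # Allow the Azure portal origin to GET blobs from this storage account.
   az storage cors add --services b --methods GET \
     --origins "https://portal.azure.com" --allowed-headers "*" \
     --max-age 200 --account-name <storage-account>
   ```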
1. Select **Done** to dismiss the **Edit Markdown** pane. Your content appears on the Markdown tile, which you can resize by dragging the handle in the lower right-hand corner.
You can add a markdown tile to your Azure dashboards to display custom, static c
You can use any combination of plain text, Markdown syntax, and HTML content on the markdown tile. The Azure portal uses an open-source library called _marked_ to transform your content into HTML that is shown on the tile. The HTML produced by _marked_ is pre-processed by the portal before it's rendered. This step helps make sure that your customization won't affect the security or layout of the portal. During that pre-processing, any part of the HTML that poses a potential threat is removed. The following types of content aren't allowed by the portal:
-* JavaScript – `<script>` tags and inline JavaScript evaluations are removed.
-* iframes - `<iframe>` tags are removed.
-* Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
+- JavaScript – `<script>` tags and inline JavaScript evaluations are removed.
+- iframes - `<iframe>` tags are removed.
+- Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
## Next steps
-* To create a custom dashboard, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md)
+- Learn more about [creating dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+- Learn how to [share a dashboard by using Azure role-based access control](azure-portal-dashboard-share-access.md).
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
You can get to **Help + support** in the Azure portal. It's available from the A
To create a support request, you must be an [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or be assigned to the [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role at the subscription level. To create a support request without a subscription, for example an Azure Active Directory scenario, you must be an [Admin](../../active-directory/roles/permissions-reference.md).
+> [!IMPORTANT]
+> If a support request requires investigation into multiple subscriptions, you must have Owner, Contributor, or Support Request Contributor role for each subscription involved.
+
### Go to Help + support from the global header

To start a support request from anywhere in the Azure portal:
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-cli.md
Samples for the Azure CLI are written for the `bash` shell. To run this sample i
If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md).

+
## Deployment scope

You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
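
As a quick illustration of how the command changes per scope, here's a sketch of the four Azure CLI variants (`demoRg`, `demoMg`, and `main.bicep` are placeholders):

```azurecli-interactive
# Resource group scope
az deployment group create --resource-group demoRg --template-file main.bicep

# Subscription scope (a location is needed for the deployment metadata)
az deployment sub create --location eastus --template-file main.bicep

# Management group scope
az deployment mg create --management-group-id demoMg --location eastus --template-file main.bicep

# Tenant scope
az deployment tenant create --location eastus --template-file main.bicep
```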
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-powershell.md
You need Azure PowerShell and to be connected to Azure:
If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md).

+
## Deployment scope

You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
The deployment commands changed in Azure CLI version 2.2.0. The examples in this
If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md).

+
## Deployment scope

You can target your Azure deployment template to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cloud-shell.md
You can use [Azure Cloud Shell](../../cloud-shell/overview.md) to deploy an Azur
You can deploy to any scope. This article shows deploying to a resource group.

+
## Deploy remote template

To deploy an external template, provide the URI of the template exactly as you would for any external deployment. The external template could be in a GitHub repository or an external storage account.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
You need to install Azure PowerShell and connect to Azure:
If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md).

+
## Deployment scope

You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-rest.md
This article explains how to use the Azure Resource Manager REST API with Azure
You can either include your template in the request body or link to a file. When using a file, it can be a local file or an external file that is available through a URI. When your template is in a storage account, you can restrict access to the template and provide a shared access signature (SAS) token during deployment.

+
## Deployment scope

You can target your deployment to a resource group, Azure subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
The deployment scope is determined by the template schema. For more information,
- [management groups](deploy-to-management-group.md)
- [tenants](deploy-to-tenant.md)

+
## Use common image

To add the button to your web page or repository, use the following image:
azure-sql Business Continuity High Availability Disaster Recover Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview.md
SQL Database and SQL Managed Instance also provide several business continuity f
- [Temporal tables](../temporal-tables.md) enable you to restore row versions from any point in time.
- [Built-in automated backups](automated-backups-overview.md) and [Point in Time Restore](recovery-using-backups.md#point-in-time-restore) enable you to restore a complete database to some point in time within the configured retention period, up to 35 days.
- You can [restore a deleted database](recovery-using-backups.md#deleted-database-restore) to the point at which it was deleted if the **server has not been deleted**.
-- [Long-term backup retention](long-term-retention-overview.md) enables you to keep the backups up to 10 years. This is in limited public preview for SQL Managed Instance.
+- [Long-term backup retention](long-term-retention-overview.md) enables you to keep the backups up to 10 years.
- [Active geo-replication](active-geo-replication-overview.md) enables you to create readable replicas and manually failover to any replica in case of a datacenter outage or application upgrade.
- [Auto-failover group](auto-failover-group-overview.md#terminology-and-capabilities) allows the application to automatically recover in case of a datacenter outage.
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
Pooled databases generally support the same [business continuity features](busin
## Creating a new SQL Database elastic pool using the Azure portal
-There are two ways you can create an elastic pool in the Azure portal.
+You can create an elastic pool in the Azure portal in two ways:
+
+- Create an elastic pool and select an existing or new server.
+- Create an elastic pool from an existing server.
+
+To create an elastic pool and select an existing or new server:
1. Go to the [Azure portal](https://portal.azure.com) to create an elastic pool. Search for and select **Azure SQL**.
-2. Select **+Add** to open the **Select SQL deployment option** page. You can view additional information about elastic pools by selecting **Show details** on the **Databases** tile.
-3. On the **Databases** tile, select **Elastic pool** in the **Resource type** dropdown, then select **Create**:
+2. Select **Create** to open the **Select SQL deployment option** pane. To view more information about elastic pools, on the **Databases** tile, select **Show details**.
+3. On the **Databases** tile, in the **Resource type** dropdown, select **Elastic pool**, and then select **Create**.
![Create an elastic pool](./media/elastic-pool-overview/create-elastic-pool.png)
-4. Or you can create an elastic pool by navigating to an existing server and clicking **+ New pool** to create a pool directly into that server.
+To create an elastic pool from an existing server:
+
+- Go to an existing server and select **New pool** to create a pool directly in that server.
> [!NOTE]
> You can create multiple pools on a server, but you can't add databases from different servers into the same pool.
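
If you'd rather script pool creation than use the portal, a minimal Azure CLI sketch against an existing server (all names and the DTU sizing are placeholders):

```azurecli-interactive
# Create an elastic pool on an existing logical server.
az sql elastic-pool create --resource-group <group-name> \
  --server <server-name> --name <pool-name> \
  --edition Standard --capacity 100
```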
azure-sql Geo Distributed Application Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/geo-distributed-application-configure-tutorial.md
Geo-replication settings can also be changed in the Azure portal, by selecting y
private static final String READ_WRITE_URL = String.format("jdbc:" + "sqlserver://%s.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
- "hostNameInCertificate=*.database.windows.net;loginTimeout=30;", +
+ "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD);

private static final String READ_ONLY_URL = String.format("jdbc:" +
"sqlserver://%s.secondary.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
- "hostNameInCertificate=*.database.windows.net;loginTimeout=30;", +
+ "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD);

public static void main(String[] args) {
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 9/9/2021 Last updated : 1/10/2022

# Hyperscale service tier
Hyperscale service tier is only available in [vCore model](service-tiers-vcore.m
- **Compute**:
- The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to high-availabilty and named replicas automatically. We create a primary replica and one secondary [high-availability replica](service-tier-hyperscale-replicas.md) per Hyperscale database by default. Users may adjust the total number of high-availability replicas from 0-4, depending on the needed [SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/).
+  The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to high-availability and named replicas automatically. Users may adjust the total number of high-availability secondary replicas from 0 to 4, depending on [SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) requirements.
- **Storage**:
- You don't need to specify the max data size when configuring a Hyperscale database. In the hyperscale tier, you're charged for storage for your database based on actual allocation. Storage is automatically allocated between 40 GB and 100 TB, in 10-GB increments. Multiple data files can grow at the same time if needed. A Hyperscale database is created with a starting size of 10 GB and it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB.
+ You don't need to specify the max data size when configuring a Hyperscale database. In the Hyperscale tier, you're charged for storage for your database based on actual allocation. Storage is automatically allocated between 40 GB and 100 TB, in 10-GB increments. Multiple data files can grow at the same time if needed. A Hyperscale database is created with a starting size of 10 GB and it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB.
For more information about Hyperscale pricing, see [Azure SQL Database Pricing](https://azure.microsoft.com/pricing/details/sql-database/single/)
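
To adjust the high-availability replica count without the portal, a minimal sketch with the Azure CLI; the `--ha-replicas` parameter is assumed to be available in your CLI version (it applies to Hyperscale databases):

```azurecli-interactive
# Set the number of HA secondary replicas (0-4) on a Hyperscale database.
az sql db update --resource-group <group-name> --server <server-name> \
  --name <database-name> --ha-replicas 2
```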
azure-video-analyzer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/connect-classic-account-to-arm.md
In this article, we will go through options on connecting your **existing** Vide
* Unlimited paid Video Analyzer for Media account (classic account).
- * To perform the connect to ARM action, you should have owner's permissions on the Video Analyzer for Media account.
+ * To perform the connect to ARM (Azure Resource Manager) action, you should have owner's permissions on the Video Analyzer for Media account.
* Azure Subscription. * User assigned managed identity (can be created along the flow).
-#### Transition state and recommended steps before connecting a classic account to be ARM-based
+## Transition state
-In the connect process the account management is connected to ARM, which will trigger 30 days of a transition state for the account. In that state, a connected account can be accessed through API by both access token [generated through API Management](https://aka.ms/avam-dev-portal)(classic way) or by an access token generated through ARM. The transition state, which moves all account management to be managed by ARM will disable the invite user feature from the Video Analyzer for Media portal, due to the fact account-management will be handled by [Azure RBAC][docs-rbac-overview]. This will result all invited users on this account losing their access to the Video Analyzer for Media account Media portal. Of course, this can easily be resolved by assigning the right role-assignment to all these users through Azure RBAC ([How to assign RBAC][docs-rbac-assignment]). Only the account owner, who performed the connect action, will be automatically assigned as an owner on the connected account. Additionally, when [Azure policies][docs-governance-policy] are enforced, this would override the settings on this account.
-If users won't be added through Azure RBAC to the account, after 30 days, they will lose access also through API, since after the transition state ends, no user can't generate a valid access token through APIM (classic way) anymore, but only through ARM. Making Azure RBAC the exclusive way to manage role based access control on the account.
+Connecting a classic account to be ARM-based triggers a 30-day transition state. In the transition state, an existing account can be accessed with an access token generated in either of two ways:
+
+* An access token [generated through API Management](https://aka.ms/avam-dev-portal) (the classic way)
+* An access token [generated through ARM](/rest/api/videoindexer/generate/access-token)
+
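
For the ARM path, the token comes from a `generateAccessToken` action on the account resource. A hedged sketch using `az rest`; the API version and body fields here are assumptions to verify against the linked REST reference:

```azurecli-interactive
# Request an ARM-based access token for a Video Analyzer for Media account.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.VideoIndexer/accounts/<account-name>/generateAccessToken?api-version=2021-11-10-preview" \
  --body '{"permissionType": "Contributor", "scope": "Account"}'
```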
+The transition state moves all account management functionality to ARM, where it's handled by [Azure RBAC][docs-rbac-overview].
+
+The [invite users](invite-users.md) feature in the Video Analyzer for Media portal gets disabled, and the users already invited to this account lose their access to it in the Video Analyzer for Media portal.
+However, this can be resolved by assigning the right role assignments to these users through Azure RBAC; see [How to assign RBAC][docs-rbac-assignment].
+
+Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
+
+If users are not added to the account through Azure RBAC within 30 days, they will lose access through the API as well as the Video Analyzer for Media portal.
+After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
> [!NOTE]
-> If there are invited users you wish to remove their access before the end of the 30 days of transition state you should do so through the account settings on the Azure Video Analyzer for Media account settings **before** connecting the account to ARM
+> If there are invited users you wish to remove access from, do it before connecting the account to ARM.
+
+Before the end of the 30-day transition state, you can remove users' access through the account settings page in the Azure Video Analyzer for Media portal.
## Get started ### Browse to [Video Analyzer for Media portal](https://aka.ms/vi-portal-link)
-1. Sign in using your AAD account.
+1. Sign in using your Azure AD account.
1. On the top right bar press *User account* to open the side pane account list.
-1. Select the Video Analyzer for Media classic account you wish to connect to ARM (Classic accounts will be tagged with a *classic tag*).
+1. Select the Video Analyzer for Media classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
1. Click **Settings**. ![account-settings](media/connect-classic-account-to-arm/user-account-settings.png)
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
First let's create an empty ASP.NET Core app.
```bash
dotnet new web
- dotnet add package Azure.Messaging.WebPubSub
+ dotnet add package Microsoft.Azure.WebPubSub.AspNetCore --version 1.0.0-beta.3
```

2. Then add `app.UseStaticFiles();` before `app.UseRouting();` in `Startup.cs` to support static files. Remove the default `endpoints.MapGet` inside `app.UseEndpoints`.
First let's create an empty ASP.NET Core app.
}

app.UseStaticFiles();
-
app.UseRouting();

app.UseEndpoints(endpoints =>
You can test the server by running `dotnet run --urls http://localhost:8080` and
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) that the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privileges to perform any operation on the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so the client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
-1. Install dependencies
- Install dependencies and use [Secret Manager](/aspnet/core/security/app-secrets#secret-manager) tool for .NET Core to set the connection string. Run the below command, replacing `<connection_string>` with the one fetched in [previous step](#get-the-connectionstring-for-future-use)
+1. Install dependencies.
   ```bash
   dotnet add package Microsoft.Extensions.Azure
   ```
-2. DI the service client inside `ConfigureServices` and don't forget to replace `<connection_string>` with the one of your services.
+2. Add a `SampleChatHub` class to handle hub events, and register the service middleware and service client with dependency injection inside `ConfigureServices()`. Don't forget to replace `<connection_string>` with the one for your service.
   ```csharp
   public void ConfigureServices(IServiceCollection services)
   {
- services.AddAzureClients(builder =>
+ services.AddWebPubSub(o => o.ServiceEndpoint = new ServiceEndpoint("<connection_string>"))
+ .AddWebPubSubServiceClient<SampleChatHub>();
+ }
+
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ {
+ if (env.IsDevelopment())
+ {
+ app.UseDeveloperExceptionPage();
+ }
+
+ app.UseStaticFiles();
+ app.UseRouting();
+
+ app.UseEndpoints(endpoints =>
{
- builder.AddWebPubSubServiceClient("<connection_string>", "chat");
    });
}
+
+ private sealed class SampleChatHub : WebPubSubHub
+ {
+ }
```
-2. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token
+
+   `AddWebPubSubServiceClient<THub>()` injects the service client `WebPubSubServiceClient<THub>`, which we can use in the negotiation step to generate the client connection token, and in hub methods to invoke the service REST APIs when hub events are triggered.
+
+3. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token.
   ```csharp
   app.UseEndpoints(endpoints =>
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
await context.Response.WriteAsync("missing user id"); return; }
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient<SampleChatHub>>();
            await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri);
        });
    });
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
You can test this API by running `dotnet run --urls http://localhost:8080` and accessing `http://localhost:8080/negotiate?id=<user-id>`; it will give you the full URL for connecting to Azure Web PubSub, with an access token included.
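
For example, with the server running you can fetch a client URL from a shell (`user1` is an illustrative user ID):

```bash
# Ask the server for a client access URL for user "user1".
curl "http://localhost:8080/negotiate?id=user1"
# Expected output is something like:
# wss://<your-instance>.webpubsub.azure.com/client/hubs/<hub>?access_token=...
```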
-3. Then update `index.html` to include the following script to get the token from server and connect to service
+4. Then update `index.html` to include the following script to get the token from the server and connect to the service.
   ```html
   <html>
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
   </html>
   ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
-
+   If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [JavaScript](#tab/javascript)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
   </html>
   ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
+   If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [Java](#tab/java)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
   </html>
   ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
+   If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
Events are delivered to server in the form of Webhook. Webhook is served and exp
Azure Web PubSub follows [CloudEvents](./reference-cloud-events.md) to describe the event data. # [C#](#tab/csharp)
-For now, you need to implement the event handler by your own in C#, the steps are straight forward following [the protocol spec](./reference-cloud-events.md) and illustrated below.
+Here we're using the Web PubSub middleware SDK, which already includes an implementation to parse and process the CloudEvents schema, so we don't need to deal with these details. Instead, we can focus on the business logic in the hub methods.
1. Add event handlers inside `UseEndpoints`. Specify the endpoint path for the events, let's say `/eventhandler`.
-
-2. First we'd like to handle the abuse protection OPTIONS requests, we check if the header contains `WebHook-Request-Origin` header, and we return the header `WebHook-Allowed-Origin`. For simplicity for demo purpose, we return `*` to allow all the origins.
   ```csharp
   app.UseEndpoints(endpoints =>
   {
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
- {
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
+ endpoints.MapWebPubSubHub<SampleChatHub>("/eventhandler/{*path}");
   });
   ```
-3. Then we'd like to check if the incoming requests are the events we expect. Let's say we now care about the system `connected` event, which should contain the header `ce-type` as `azure.webpubsub.sys.connected`. We add the logic after abuse protection:
+2. Go to the `SampleChatHub` class we created in the previous step. Add a constructor that takes a `WebPubSubServiceClient<SampleChatHub>` so we can use it to invoke the service, and override the `OnConnectedAsync()` method to respond when the `connected` event is triggered.
```csharp
- app.UseEndpoints(endpoints =>
+ private sealed class SampleChatHub : WebPubSubHub
{
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
+ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+
+ public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
{
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
+ _serviceClient = serviceClient;
+ }
+
+ public override async Task OnConnectedAsync(ConnectedEventRequest request)
+ {
+ await _serviceClient.SendToAllAsync($"[SYSTEM] {request.ConnectionContext.UserId} joined.");
+ }
+ }
```
-In the above code, we simply print a message to console when a client is connected. You can see we use `context.Request.Headers["ce-userId"]` so we can see the identity of the connected client.
+In the above code, we use the service client to broadcast a notification message to everyone who has joined.
# [JavaScript](#tab/javascript)
Besides system events like `connected` or `disconnected`, client can also send m
# [C#](#tab/csharp)
-The `ce-type` of `message` event is always `azure.webpubsub.user.message`, details see [Event message](./reference-cloud-events.md#message).
+Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
-1. Handle message event
+1. Handle message event.
```csharp
- app.UseEndpoints(endpoints =>
+ private sealed class SampleChatHub : WebPubSubHub
{
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
+ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+
+ public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
{
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- context.Response.StatusCode = 200;
- return;
- }
- else if (context.Request.Headers["ce-type"] == "azure.webpubsub.user.message")
- {
- using var stream = new StreamReader(context.Request.Body);
- await serviceClient.SendToAllAsync($"[{userId}] {await stream.ReadToEndAsync()}");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
+ _serviceClient = serviceClient;
+ }
+
+ public override async Task OnConnectedAsync(ConnectedEventRequest request)
+ {
+ await _serviceClient.SendToAllAsync($"[SYSTEM] {request.ConnectionContext.UserId} joined.");
+ }
+
+ public override async ValueTask<UserEventResponse> OnMessageReceivedAsync(UserEventRequest request, CancellationToken cancellationToken)
+ {
+ await _serviceClient.SendToAllAsync($"[{request.ConnectionContext.UserId}] {request.Data}");
+
+        return request.CreateResponse("[SYSTEM] ack.");
+ }
+ }
```
- This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients.
+   This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients. Notice that at the end we return a `UserEventResponse`, which sends a message directly back to the caller and makes the WebHook request succeed. If you have extra validation logic and would like to reject the call, you can throw an exception here; the middleware will deliver the exception message to the service, and the service will drop the current client connection.
2. Update `index.html` to add the logic to send messages from the user to the server and display received messages in the page.
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
You can see in the above code we use `WebSocket.send()` to send message and `WebSocket.onmessage` to listen to message from service.
-3. Finally update the `onConnected` handler to broadcast the connected event to all clients so they can see who joined the chat room.
-
- ```csharp
- app.UseEndpoints(endpoints =>
- {
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
- {
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- await serviceClient.SendToAllAsync($"[SYSTEM] {userId} joined.");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
- ```
- Now run the server using `dotnet run --urls http://localhost:8080` and open multiple browser instances to access http://localhost:8080/https://docsupdatetracker.net/index.html, then you can chat with each other. The complete code sample of this tutorial can be found [here][code-csharp].
backup Quick Backup Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-portal.md
Title: Quickstart - Back up a VM with the Azure portal description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and backup the VM, with the Azure portal. Previously updated : 05/12/2020 Last updated : 01/11/2022 ms.devlang: azurecli
+++

# Back up a virtual machine in Azure
This quickstart enables backup on an existing Azure VM. If you need to create a
Sign in to the [Azure portal](https://portal.azure.com).
++
+## Apply a backup policy
+
+To apply a backup policy to your Azure VMs, follow these steps:
+
+1. Go to **Backup center** and click **+Backup** from the **Overview** tab.
+
+ ![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
+
+1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
+
+ ![Screenshot showing Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
+
+1. Assign a Backup policy.
+
+ - The default policy backs up the VM once a day. The daily backups are retained for _30 days_. Instant recovery snapshots are retained for two days.
+
+ ![Screenshot showing the default backup policy.](./media/backup-azure-arm-vms-prepare/default-policy.png)
+
+ - If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.
++ ## Select a VM to back up Create a simple scheduled daily backup to a Recovery Services vault.
-1. In the menu on the left, select **Virtual machines**.
-2. From the list, choose a VM to back up. If you used the sample VM quickstart commands, the VM is named *myVM* in the *myResourceGroup* resource group.
-3. In the **Operations** section, choose **Backup**. The **Enable backup** window opens.
+1. Under **Virtual Machines**, select **Add**.
+
+ ![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
+
+1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
+
+ * The selected VMs are validated.
+ * You can only select VMs in the same region as the vault.
+ * VMs can only be backed up in a single vault.
+
+ ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
+
+ >[!NOTE]
+ > All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in soft deleted state, then it won't be visible in this list. If you need to re-protect the VM, then you need to wait for the soft delete period to expire or undelete the VM from the soft deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
## Enable backup on a VM A Recovery Services vault is a logical container that stores the backup data for each protected resource, such as Azure VMs. When the backup job for a protected resource runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
-1. Select **Create new** and provide a name for the new vault, such as *myRecoveryServicesVault*.
-2. If not already selected, choose **Use existing**, then select the resource group of your VM from the drop-down menu.
+To enable VM backup, in **Backup**, select **Enable backup**. This deploys the policy to the vault and to the VMs, and installs the backup extension on the VM agent running on the Azure VM.
- ![Enable VM backup in the Azure portal](./media/quick-backup-vm-portal/enable-backup.png)
+After enabling backup:
- By default, the vault is set for Geo-Redundant storage. To further protect your data, this storage redundancy level ensures that your backup data is replicated to a secondary Azure region that's hundreds of miles away from the primary region.
+- The Backup service installs the backup extension whether or not the VM is running.
+- An initial backup will run in accordance with your backup schedule.
+- When backups run, note that:
+ - A VM that's running has the greatest chance for capturing an application-consistent recovery point.
+ - However, even if the VM is turned off, it's backed up. Such a VM is known as an offline VM. In this case, the recovery point will be crash-consistent.
+- Explicit outbound connectivity isn't required to allow backup of Azure VMs.
- You create and use policies to define when a backup job runs and how long the recovery points are stored. The default protection policy runs a backup job each day and retains recovery points for 30 days. You can use these default policy values to quickly protect your VM.
+### Create a custom policy
-3. To accept the default backup policy values, select **Enable Backup**.
+If you selected to create a new backup policy, fill in the policy settings.
-It takes a few moments to create the Recovery Services vault.
+1. In **Policy name**, specify a meaningful name.
+2. In **Backup schedule**, specify when backups should be taken. You can take daily or weekly backups for Azure VMs.
+3. In **Instant Restore**, specify how long you want to retain snapshots locally for instant restore.
+ * When you restore, backed up VM disks are copied from storage, across the network to the recovery storage location. With instant restore, you can leverage locally stored snapshots taken during a backup job, without waiting for backup data to be transferred to the vault.
+ * You can retain snapshots for instant restore for between one and five days. The default value is two days.
+4. In **Retention range**, specify how long you want to keep your daily or weekly backup points.
+5. In **Retention of monthly backup point** and **Retention of yearly backup point**, specify whether you want to keep a monthly or yearly backup of your daily or weekly backups.
+6. Select **OK** to save the policy.
+ > [!NOTE]
+ > To store the restore point collection (RPC), the Backup service creates a separate resource group (RG). This RG is different from the RG of the VM. [Learn more](backup-during-vm-creation.md#azure-backup-resource-group-for-virtual-machines).
-## Start a backup job
+ ![Screenshot showing the new backup policy.](./media/backup-azure-arm-vms-prepare/new-policy.png)
-You can start a backup now rather than wait for the default policy to run the job at the scheduled time. This first backup job creates a full recovery point. Each backup job after this initial backup creates incremental recovery points. Incremental recovery points are storage and time-efficient, as they only transfer changes made since the last backup.
+> [!NOTE]
+ > Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
-1. On the **Backup** window for your VM, select **Backup now**.
+## Start a backup job
- ![Perform immediate VM backup in the Azure portal](./media/quick-backup-vm-portal/backup-now.png)
+The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
-2. To accept the backup retention policy of 30 days, leave the default **Retain Backup Till** date. To start the job, select **Backup**.
+1. Go to **Backup center** and select the **Backup Instances** menu item.
+1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup.
+1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**.
+1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
+1. Monitor the portal notifications.
+ To monitor the job progress, go to **Backup center** > **Backup Jobs** and filter the list for **In progress** jobs.
+ Depending on the size of your VM, creating the initial backup may take a while.
## Monitor the backup job
-In the **Backup** window for your VM, the status of the backup and number of completed restore points are shown. Once the VM backup job is complete, information on the **Last backup time**, **Latest restore point**, and **Oldest restore point** is shown on the right-hand side of the **Overview** window.
+The Backup job details for each VM backup consist of two phases, the **Snapshot** phase followed by the **Transfer data to vault** phase.
+
+The snapshot phase guarantees the availability of a recovery point stored along with the disks for **Instant Restores**, which is available for a maximum of five days depending on the snapshot retention configured by the user. The **Transfer data to vault** phase creates a recovery point in the vault for long-term retention, and it starts only after the snapshot phase is completed.
+
+ ![Screenshot showing the backup job status.](./media/backup-azure-arm-vms-prepare/backup-job-status.png)
+
+There are two **Sub Tasks** running at the backend; the one for the front-end backup job can be checked from the **Backup Job** details pane, as shown below:
+
+ ![Screenshot showing backup job status sub-tasks.](./media/backup-azure-arm-vms-prepare/backup-job-phase.png)
+
+The **Transfer data to vault** phase can take multiple days to complete depending on the size of the disks, churn per disk and several other factors.
+
+Job status can vary depending on the following scenarios:
+
+**Snapshot** | **Transfer data to vault** | **Job Status**
+--- | --- | ---
+Completed | In progress | In progress
+Completed | Skipped | Completed
+Completed | Completed | Completed
+Completed | Failed | Completed with warning
+Failed | Failed | Failed
+
+Now with this capability, two backups can run in parallel for the same VM, but in either phase (snapshot, transfer data to vault), only one sub task can be running. So the scenario where an in-progress backup job caused the next day's backup to fail is avoided with this decoupling functionality. Subsequent days' backups can have the snapshot completed, while **Transfer data to vault** is skipped if an earlier day's backup job is in progress.
+The incremental recovery point created in the vault captures all the churn from the most recent recovery point created in the vault. There's no cost impact on the user.
+
+## Optional steps
+
+### Install the VM agent
+
+Azure Backup backs up Azure VMs by installing an extension to the Azure VM agent running on the machine. If your VM was created from an Azure Marketplace image, the agent is installed and running. If you create a custom VM, or you migrate an on-premises machine, you might need to install the agent manually, as summarized in the table.
+
+**VM** | **Details**
+--- | ---
+**Windows** | 1. [Download and install](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409) the agent MSI file. <br><br> 2. Install with admin permissions on the machine. <br><br> 3. Verify the installation. In *C:\WindowsAzure\Packages* on the VM, right-click **WaAppAgent.exe** > **Properties**. On the **Details** tab, **Product Version** should be 2.6.1198.718 or higher. <br><br> If you're updating the agent, make sure that no backup operations are running, and [reinstall the agent](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409).
+**Linux** | Install by using an RPM or a DEB package from your distribution's package repository. This is the preferred method for installing and upgrading the Azure Linux agent. All the [endorsed distribution providers](../virtual-machines/linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories. The agent is available on [GitHub](https://github.com/Azure/WALinuxAgent), but we don't recommend installing from there. <br><br> If you're updating the agent, make sure no backup operations are running, and update the binaries.
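If you'd rather script the Windows verification step in the table above than check file properties by hand, a minimal sketch (assuming the default install path and a plainly formatted product version string) could look like this:

```csharp
using System;
using System.Diagnostics;

class CheckAgentVersion
{
    static void Main()
    {
        // Path from the table above; adjust if the agent is installed elsewhere.
        const string agentPath = @"C:\WindowsAzure\Packages\WaAppAgent.exe";

        var info = FileVersionInfo.GetVersionInfo(agentPath);

        // ProductVersion is usually a plain four-part version for this agent;
        // TryParse guards against any unexpected suffixes.
        if (!Version.TryParse(info.ProductVersion, out var version))
        {
            Console.WriteLine($"Couldn't parse product version '{info.ProductVersion}'.");
            return;
        }

        // The table above calls for product version 2.6.1198.718 or higher.
        var minimum = new Version(2, 6, 1198, 718);
        Console.WriteLine(version >= minimum
            ? $"Agent {version} meets the minimum {minimum}."
            : $"Agent {version} is older than {minimum}; update the agent.");
    }
}
```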
## Clean up deployment
If you're going to continue on to a Backup tutorial that explains how to restore
2. Choose **Stop backup**.
- ![Stop VM backup from the Azure portal](./media/quick-backup-vm-portal/stop-backup.png)
+ ![Screenshot showing to stop VM backup from the Azure portal.](./media/quick-backup-vm-portal/stop-backup.png)
3. Select **Delete Backup Data** from the drop-down menu.
If you're going to continue on to a Backup tutorial that explains how to restore
6. From the list, choose your resource group. If you used the sample VM quickstart commands, the resource group is named *myResourceGroup*. 7. Select **Delete resource group**. To confirm, enter the resource group name, then select **Delete**.
- ![Delete the resource group from the Azure portal](./media/quick-backup-vm-portal/delete-resource-group-from-portal.png)
+ ![Screenshot showing to delete the resource group from the Azure portal.](./media/quick-backup-vm-portal/delete-resource-group-from-portal.png)
## Next steps
backup Tutorial Backup Vm At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-backup-vm-at-scale.md
Title: Tutorial - Back up multiple Azure virtual machines description: In this tutorial, learn how to create a Recovery Services vault, define a backup policy, and simultaneously back up multiple virtual machines. Previously updated : 07/26/2020 Last updated : 01/11/2022 +++ # Use Azure portal to back up multiple virtual machines
When you back up data in Azure, you store that data in an Azure resource called
Sign in to the [Azure portal](https://portal.azure.com/).
-## Create a Recovery Services vault
-The Recovery Services vault contains the backup data, and the backup policy applied to the protected virtual machines. Backing up virtual machines is a local process. You can't back up a virtual machine from one location to a Recovery Services vault in another location. So, for each Azure location that has virtual machines to be backed up, at least one Recovery Services vault must exist in that location.
-
-1. On the left menu, select **All services**.
-
- ![Select All services](./media/tutorial-backup-vm-at-scale/click-all-services.png)
-
-1. In the **All services** dialog box, enter *Recovery Services*. The list of resources filters according to your input. In the list of resources, select **Recovery Services vaults**.
-
- ![Enter and choose Recovery Services vaults](./media/tutorial-backup-vm-at-scale/all-services.png)
-
- The list of Recovery Services vaults in the subscription appears.
-
-1. On the **Recovery Services vaults** dashboard, select **Add**.
-
- ![Add a Recovery Services vault](./media/tutorial-backup-vm-at-scale/add-button-create-vault.png)
-
-1. In the Recovery Services vault menu,
-
- * Type *myRecoveryServicesVault* in **Name**.
- * The current subscription ID appears in **Subscription**. If you have additional subscriptions, you can choose another subscription for the new vault.
- * For **Resource group**, select **Use existing** and choose *myResourceGroup*. If *myResourceGroup* doesn't exist, select **Create new** and type *myResourceGroup*.
- * From the **Location** drop-down menu, choose *West Europe*.
-
- ![Recovery Services vault values](./media/tutorial-backup-vm-at-scale/review-and-create.png)
-
- A Recovery Services vault must be in the same location as the virtual machines being protected. If you have virtual machines in multiple regions, create a Recovery Services vault in each region. This tutorial creates a Recovery Services vault in *West Europe* because that's where *myVM* (the virtual machine created with the quickstart) was created.
-
-1. When you're ready to create the Recovery Services vault, select **Create**.
-
- ![Create the Recovery Services vault](./media/tutorial-backup-vm-at-scale/click-create-button.png)
-
-1. It can take a while to create the Recovery Services vault. Monitor the status notifications in the **Notifications** area at the upper-right corner of the portal. After your vault is created, it's visible in the list of Recovery Services vaults. If you don't see your vault, select **Refresh**.
-
- ![Refresh the list of backup vaults](./media/tutorial-backup-vm-at-scale/refresh-button.png)
When you create a Recovery Services vault, by default the vault has geo-redundant storage. To provide data resiliency, geo-redundant storage replicates the data multiple times across two Azure regions.
When you create a Recovery Services vault, by default the vault has geo-redundan
After creating the Recovery Services vault, the next step is to configure the vault for the type of data, and to set the backup policy. Backup policy is the schedule for how often and when recovery points are taken. Policy also includes the retention range for the recovery points. For this tutorial, let's assume your business is a sports complex with a hotel, stadium, and restaurants and concessions, and you're protecting the data on the virtual machines. The following steps create a backup policy for the financial data.
-1. From the list of Recovery Services vaults, select **myRecoveryServicesVault** to open its dashboard.
+To set a backup policy to your Azure VMs, follow these steps:
- ![Open Scenario menu](./media/tutorial-backup-vm-at-scale/open-vault-from-list.png)
+1. Go to **Backup center** and click **+Backup** from the **Overview** tab.
-1. On the vault dashboard menu, select **Backup** to open the Backup menu.
+ ![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
-1. On the Backup Goal menu, in the **Where is your workload running** drop-down menu, choose *Azure*. From the **What do you want to backup** drop-down, choose *Virtual machine*, and select **Backup**.
+1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
- These actions prepare the Recovery Services vault for interacting with a virtual machine. Recovery Services vaults have a default policy that creates a restore point each day, and retains the restore points for 30 days.
+ ![Screenshot showing the Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
- ![Backup goal](./media/tutorial-backup-vm-at-scale/backup-goal.png)
+1. Assign a Backup policy.
-1. To create a new policy, on the Backup policy menu, from the **Choose backup policy** drop-down menu, select *Create a new policy*.
+ - The default policy backs up the VM once a day. The daily backups are retained for 30 days. Instant recovery snapshots are retained for two days.
- ![Create new policy](./media/tutorial-backup-vm-at-scale/create-new-policy.png)
+ ![Screenshot showing the default backup policy.](./media/backup-azure-arm-vms-prepare/default-policy.png)
-1. The **Backup policy** pane will open. Fill out the following details:
- * For **Policy Name** type *Finance*. Enter the following changes for the Backup policy:
- * For **Backup frequency** set the timezone for *Central Time*. Since the sports complex is in Texas, the owner wants the timing to be local. Leave the backup frequency set to Daily at 3:30AM.
- * For **Retention of daily backup point**, set the period to 90 days.
- * For **Retention of weekly backup point**, use the *Monday* restore point and retain it for 52 weeks.
- * For **Retention of monthly backup point**, use the restore point from First Sunday of the month, and retain it for 36 months.
- * Deselect the **Retention of yearly backup point** option. The leader of Finance doesn't want to keep data longer than 36 months.
- * Select **OK** to create the backup policy.
+ - If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.
- ![Backup policy settings](./media/tutorial-backup-vm-at-scale/set-new-policy.png)
+1. Under **Virtual Machines**, select **Add**.
- After creating the backup policy, associate the policy with the virtual machines.
+ ![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
-1. Under **Virtual Machines**, select **Add**.
+1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
- ![Add virtual machines](./media/tutorial-backup-vm-at-scale/add-virtual-machines.png)
+ * The selected VMs are validated.
+ * You can only select VMs in the same region as the vault.
+ * VMs can only be backed up in a single vault.
-1. The **Select virtual machines** pane will open. Select *myVM* and select **OK** to deploy the backup policy to the virtual machines.
+ ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
- All virtual machines that are in the same location, and aren't already associated with a backup policy, appear. *myVMH1* and *myVMR1* are selected to be associated with the *Finance* policy.
+ >[!NOTE]
+ > All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in soft deleted state, then it won't be visible in this list. If you need to re-protect the VM, then you need to wait for the soft delete period to expire or undelete the VM from the soft deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
- ![Choose VMs to protect](./media/tutorial-backup-vm-at-scale/choose-vm-to-protect.png)
+1. In **Backup**, select **Enable backup**. This deploys the policy to the vault and to the VMs, and installs the backup extension on the VM agent running on the Azure VM.
-1. After the virtual machines have been chosen, select **Enable Backup**.
+After enabling backup:
- When the deployment completes, you'll receive a notification that deployment successfully completed.
+* The Backup service installs the backup extension whether or not the VM is running.
+* An initial backup will run in accordance with your backup schedule.
+* When backups run, note that:
+ * A VM that's running has the greatest chance for capturing an application-consistent recovery point.
+ * However, even if the VM is turned off, it's backed up. Such a VM is known as an offline VM. In this case, the recovery point will be crash-consistent.
+* Explicit outbound connectivity isn't required to allow backup of Azure VMs.
## Initial backup You've enabled backup for the Recovery Services vaults, but an initial backup hasn't been created. It's a disaster recovery best practice to trigger the first backup, so that your data is protected.
-To run an on-demand backup job:
-
-1. On the vault dashboard, select **3** under **Backup Items**, to open the Backup Items menu.
-
- ![Backup items](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
-
- The **Backup Items** menu opens.
-
-1. On the **Backup Items** menu, select **Azure Virtual Machine** to open the list of virtual machines associated with the vault.
-
- ![List of virtual machines](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
-
-1. The **Backup Items** list opens.
-
- ![Backup job triggered](./media/tutorial-backup-vm-at-scale/initial-backup-context-menu.png)
-
-1. On the **Backup Items** list, select the ellipses **...** to open the Context menu.
-
-1. On the Context menu, select **Backup now**.
-
- ![Context menu - select Backup now](./media/tutorial-backup-vm-at-scale/context-menu.png)
-
- The Backup Now menu opens.
-
-1. On the Backup Now menu, enter the last day to retain the recovery point, and select **OK**.
-
- ![Set the last day the Backup Now recovery point is retained](./media/tutorial-backup-vm-at-scale/backup-now-short.png)
-
- Deployment notifications let you know the backup job has been triggered, and that you can monitor the progress of the job on the Backup jobs page. Depending on the size of your virtual machine, creating the initial backup may take a while.
-
- When the initial backup job completes, you can see its status in the Backup job menu. The on-demand backup job created the initial restore point for *myVM*. If you want to back up other virtual machines, repeat these steps for each virtual machine.
+The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
- ![Backup Jobs tile](./media/tutorial-backup-vm-at-scale/initial-backup-complete.png)
+1. Go to **Backup center** and select the **Backup Instances** menu item.
+1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup.
+1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**.
+1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
+1. Monitor the portal notifications.
+ To monitor the job progress, go to **Backup center** > **Backup Jobs** and filter the list for **In progress** jobs.
+ Depending on the size of your VM, creating the initial backup may take a while.
## Clean up resources
If you plan to continue on to work with subsequent tutorials, don't clean up the
1. On the **myRecoveryServicesVault** dashboard, select **3** under **Backup Items** to open the Backup Items menu.
- ![Open Backup items menu](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
+ ![Screenshot showing to open Backup items menu.](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
1. On the **Backup Items** menu, select **Azure Virtual Machine** to open the list of virtual machines associated with the vault.
- ![List of virtual machines](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
+ ![Screenshot showing the list of virtual machines.](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
The **Backup Items** list opens. 1. In the **Backup Items** menu, select the ellipsis to open the Context menu.
- ![In Backup Items menu, open Context menu](./media/tutorial-backup-vm-at-scale/context-menu-to-delete-vm.png)
+ ![Screenshot showing to open Context menu in Backup Items menu.](./media/tutorial-backup-vm-at-scale/context-menu-to-delete-vm.png)
1. On the context menu, select **Stop backup** to open Stop Backup menu.
- ![Stop backup menu](./media/tutorial-backup-vm-at-scale/context-menu-for-delete.png)
+ ![Screenshot showing the Stop backup menu.](./media/tutorial-backup-vm-at-scale/context-menu-for-delete.png)
1. In the **Stop Backup** menu, select the upper drop-down menu and choose **Delete Backup Data**.
If you plan to continue on to work with subsequent tutorials, don't clean up the
1. Once the backup item is verified (a check mark appears), **Stop backup** button is enabled. Select **Stop Backup** to stop the policy and delete the restore points.
- ![Select Stop backup to delete vault](./media/tutorial-backup-vm-at-scale/provide-reason-for-delete.png)
+ ![Screenshot showing to select Stop backup to delete vault.](./media/tutorial-backup-vm-at-scale/provide-reason-for-delete.png)
>[!NOTE] >Deleted items are retained in the soft delete state for 14 days. Only after that period can the vault be deleted. For more information, see [Delete an Azure Backup Recovery Services vault](backup-azure-delete-vault.md). 1. When there are no more items in the vault, select **Delete**.
- ![Select delete](./media/tutorial-backup-vm-at-scale/deleting-the-vault.png)
+ ![Screenshot showing to select delete.](./media/tutorial-backup-vm-at-scale/deleting-the-vault.png)
Once the vault is deleted, you'll return to the list of Recovery Services vaults.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
description: This article lists the Microsoft Security Response Center updates a
documentationcenter: na-+ editor: '' ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 na Previously updated : 11/19/2021- Last updated : 1/11/2022+ # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## December 2021 Guest OS
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| --- | --- | --- | --- | --- |
+| Rel 21-12 | [5008218] | Latest Cumulative Update (LCU) | [6.38] | Dec 14, 2021 |
+| Rel 21-12 | [5006671] | IE Cumulative Updates | [2.117], [3.104], [4.97] | Dec 14, 2021 |
+| Rel 21-12 | [5008223] | Latest Cumulative Update (LCU) | [7.6] | Dec 14, 2021 |
+| Rel 21-12 | [5008207] | Latest Cumulative Update (LCU) | [5.62] | Dec 14, 2021 |
+| Rel 21-12 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | [2.117] | Feb 16, 2021 |
+| Rel 21-12 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | [2.117] | Jun 8, 2021 |
+| Rel 21-12 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | [4.97] | Feb 16, 2021 |
+| Rel 21-12 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | [4.97] | Feb 16, 2021 |
+| Rel 21-12 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [5008244] | Monthly Rollup | [2.117] | Sep 14, 2021 |
+| Rel 21-12 | [5008277] | Monthly Rollup | [3.104] | Sep 14, 2021 |
+| Rel 21-12 | [5008263] | Monthly Rollup | [4.97] | Sep 14, 2021 |
+| Rel 21-12 | [5001401] | Servicing Stack update | [3.104] | Apr 13, 2021 |
+| Rel 21-12 | [5001403] | Servicing Stack update | [4.97] | Apr 13, 2021 |
+| Rel 21-12 OOB | [4578013] | Standalone Security Update | [4.97] | Aug 19, 2020 |
+| Rel 21-12 | [5005698] | Servicing Stack update | [5.62] | Sep 14, 2021 |
+| Rel 21-12 | [5006749] | Servicing Stack update | [2.117] | July 13, 2021 |
+| Rel 21-12 | [5008287] | Servicing Stack update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [4494175] | Microcode | [5.62] | Sep 1, 2020 |
+| Rel 21-12 | [4494174] | Microcode | [6.38] | Sep 1, 2020 |
+
+[5008218]: https://support.microsoft.com/kb/5008218
+[5006671]: https://support.microsoft.com/kb/5006671
+[5008223]: https://support.microsoft.com/kb/5008223
+[5008207]: https://support.microsoft.com/kb/5008207
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[5004335]: https://support.microsoft.com/kb/5004335
+[5008244]: https://support.microsoft.com/kb/5008244
+[5008277]: https://support.microsoft.com/kb/5008277
+[5008263]: https://support.microsoft.com/kb/5008263
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
+[4578013]: https://support.microsoft.com/kb/4578013
+[5005698]: https://support.microsoft.com/kb/5005698
+[5006749]: https://support.microsoft.com/kb/5006749
+[5008287]: https://support.microsoft.com/kb/5008287
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+[2.117]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.104]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.97]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.62]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.38]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.6]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## November 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
## October 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 21-10 | [5006672] | Latest Cumulative Update(LCU) | 6.36 | Oct 12 , 2021 |
-| Rel 21-10 | [5006671] | IE Cumulative Updates | 2.115, 3.102, 4.95 | Oct 12 , 2021 |
+| Rel 21-10 | [5006672] | Latest Cumulative Update (LCU) | [6.36] | Oct 12, 2021 |
+| Rel 21-10 | [5006671] | IE Cumulative Updates | [2.115], [3.102], [4.95] | Oct 12, 2021 |
| Rel 21-10 | [5006699] | IE Cumulative Updates | 7.00 | Oct 12, 2021 |
-| Rel 21-10 | [5006669] | Latest Cumulative Update(LCU) | 5.60 | Oct 12 , 2021 |
-| Rel 21-10 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.115 | Feb 16, 2021 |
-| Rel 21-10 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.115 | Jun 8, 2021 |
-| Rel 21-10 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.95 | Feb 16, 2021 |
-| Rel 21-10 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.95 | Feb 16, 2021 |
-| Rel 21-10 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.102 | Feb 16, 2021 |
-| Rel 21-10 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.102 | Feb 16, 2021 |
-| Rel 21-10 | [5004335] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.36 | Aug 10, 2021 |
-| Rel 21-10 | [5006743] | Monthly Rollup  | 2.115 | Sep 14, 2021 |
-| Rel 21-10 | [5006739] | Monthly Rollup  | 3.102 | Sep 14, 2021 |
-| Rel 21-10 | [5006714] | Monthly Rollup  | 4.95 | Sep 14, 2021 |
-| Rel 21-10 | [5001401] | Servicing Stack update  | 3.102 | Apr 13, 2021 |
-| Rel 21-10 | [5001403] | Servicing Stack update  | 4.95 | Apr 13, 2021 |
-| Rel 21-10 OOB | [4578013] | Standalone Security Update  | 4.95 | Aug 19, 2020 |
-| Rel 21-10 | [5005698] | Servicing Stack update  | 5.60 | Sep 14, 2021 |
-| Rel 21-10 | [5006749] | Servicing Stack update  | 2.115 | July 13, 2021 |
-| Rel 21-10 | [4494175] | Microcode  | 5.60 | Sep 1, 2020 |
-| Rel 21-10 | [4494174] | Microcode  | 6.36 | Sep 1, 2020 |
+| Rel 21-10 | [5006669] | Latest Cumulative Update (LCU) | [5.60] | Oct 12, 2021 |
+| Rel 21-10 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | [2.115] | Feb 16, 2021 |
+| Rel 21-10 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | [2.115] | Jun 8, 2021 |
+| Rel 21-10 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | [4.95] | Feb 16, 2021 |
+| Rel 21-10 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | [4.95] | Feb 16, 2021 |
+| Rel 21-10 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | [3.102] | Feb 16, 2021 |
+| Rel 21-10 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.102] | Feb 16, 2021 |
+| Rel 21-10 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.36] | Aug 10, 2021 |
+| Rel 21-10 | [5006743] | Monthly Rollup  | [2.115] | Sep 14, 2021 |
+| Rel 21-10 | [5006739] | Monthly Rollup  | [3.102] | Sep 14, 2021 |
+| Rel 21-10 | [5006714] | Monthly Rollup  | [4.95] | Sep 14, 2021 |
+| Rel 21-10 | [5001401] | Servicing Stack update  | [3.102] | Apr 13, 2021 |
+| Rel 21-10 | [5001403] | Servicing Stack update  | [4.95] | Apr 13, 2021 |
+| Rel 21-10 OOB | [4578013] | Standalone Security Update  | [4.95] | Aug 19, 2020 |
+| Rel 21-10 | [5005698] | Servicing Stack update  | [5.60] | Sep 14, 2021 |
+| Rel 21-10 | [5006749] | Servicing Stack update  | [2.115] | July 13, 2021 |
+| Rel 21-10 | [4494175] | Microcode  | [5.60] | Sep 1, 2020 |
+| Rel 21-10 | [4494174] | Microcode  | [6.36] | Sep 1, 2020 |
[5006672]: https://support.microsoft.com/kb/5006672 [5006671]: https://support.microsoft.com/kb/5006671
The following tables show the Microsoft Security Response Center (MSRC) updates
[5006749]: https://support.microsoft.com/kb/5006749 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174-
+[2.115]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.102]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.95]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.60]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.36]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## September 2021 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
description: The latest release news and SDK compatibility for Azure Cloud Servi
documentationcenter: na-+ editor: '' ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 na Previously updated : 11/19/2021- Last updated : 1/10/2022+ # Azure Guest OS releases and SDK compatibility matrix Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it is not vital that you read this page.
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **January 10, 2022**
+The December Guest OS has released.
+ ###### **November 19, 2021** The November Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
-| WA-GUEST-OS-7.5_202111-01 | November 19, 2021 | Post 7.52 |
+| WA-GUEST-OS-7.6_202112-01 | January 10, 2022 | Post 7.8 |
+| WA-GUEST-OS-7.5_202111-01 | November 19, 2021 | Post 7.7 |
## Family 6 releases
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.38_202112-01 | January 10, 2022 | Post 6.40 |
| WA-GUEST-OS-6.37_202111-01 | November 19, 2021 | Post 6.39 |
-| WA-GUEST-OS-6.36_202110-01 | November 1, 2021 | Post 6.38 |
+|~~WA-GUEST-OS-6.36_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-6.35_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-6.34_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-6.33_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.62_202112-01 | January 10, 2022 | Post 5.64 |
| WA-GUEST-OS-5.61_202111-01 | November 19, 2021 | Post 5.63 |
-| WA-GUEST-OS-5.60_202110-01 | November 1, 2021 | Post 5.62 |
+|~~WA-GUEST-OS-5.60_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-5.59_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-5.58_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-5.57_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.97_202112-01 | January 10, 2022 | Post 4.99 |
| WA-GUEST-OS-4.96_202111-01 | November 19, 2021 | Post 4.98 |
-| WA-GUEST-OS-4.95_202110-01 | November 1, 2021 | Post 4.97 |
+|~~WA-GUEST-OS-4.95_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-4.94_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-4.93_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-4.92_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.104_202112-01 | January 10, 2022 | Post 3.106 |
| WA-GUEST-OS-3.103_202111-01 | November 19, 2021 | Post 3.105 |
-| WA-GUEST-OS-3.102_202110-01 | November 1, 2021 | Post 3.104 |
+|~~WA-GUEST-OS-3.102_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-3.101_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-3.100_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-3.99_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.117_202112-01 | January 10, 2022 | Post 2.119 |
| WA-GUEST-OS-2.116_202111-01 | November 19, 2021 | Post 2.118 |
-| WA-GUEST-OS-2.115_202110-01 | November 1, 2021 | Post 2.117 |
+|~~WA-GUEST-OS-2.115_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-2.114_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-2.113_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-2.112_202107-01~~| August 13, 2021 | October 8, 2021 |
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
Title: What is the Anomaly Detector Multivariate API?
+ Title: What is Multivariate Anomaly Detector?
description: Overview of new Anomaly Detector preview multivariate APIs.
Last updated 04/01/2021 + keywords: anomaly detection, machine learning, algorithms
-# Multivariate time series Anomaly Detection (preview)
+# What is Multivariate Anomaly Detector? (Public Preview)
The new **multivariate anomaly detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
To run the Notebook, you should get a valid Anomaly Detector API **subscription
## Region support
-The preview of Multivariate Anomaly Detector is currently available in 26 Azure regions: Southeast Asia, Australia East, Canada Central, North Europe, West Europe, East US, East US 2, South Central US, West US, West US 2, West US 3, UK South, Japan East, Brazil South, Central US, Norway East, Korea Central, East Asia, Germany West Central, Central India, UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West.
+The preview of Multivariate Anomaly Detector is currently available in 26 Azure regions.
+
+| Geography | Regions |
+| - | - |
+| Africa | South Africa North |
+| Asia Pacific | Southeast Asia, East Asia |
+| Australia | Australia East |
+| Brazil | Brazil South |
+| Canada | Canada Central |
+| Europe | North Europe, West Europe, Switzerland North |
+| France | France Central |
+| Germany | Germany West Central |
+| India | Jio India West, Central India |
+| Japan | Japan East |
+| Korea | Korea Central |
+| Norway | Norway East |
+| United Arab Emirates | UAE North |
+| United Kingdom | UK South |
+| United States | East US, East US 2, South Central US, West US, West US 2, West US 3, Central US, North Central US |
+++ ## Algorithms
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
Title: What is the Anomaly Detector API?
+ Title: What is the Univariate Anomaly Detector?
description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data.
keywords: anomaly detection, machine learning, algorithms
-# What is the Anomaly Detector univariate API?
+# What is Univariate Anomaly Detector?
The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
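To make the request/response flow summarized below concrete, here's a minimal sketch of calling the univariate detection REST endpoint with `HttpClient`. The resource endpoint and key are placeholders, and the `v1.0` path and JSON body shape are assumptions drawn from the public API reference that you should verify against the current documentation:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DetectLastPoint
{
    static async Task Main()
    {
        // Placeholders: endpoint and key come from your own Anomaly Detector resource.
        var endpoint = "https://<your-resource>.cognitiveservices.azure.com";
        var apiKey = "<your-key>";

        // Build a minimal daily series (the service needs at least 12 points);
        // the last point is deliberately a spike so it should flag as anomalous.
        var points = Enumerable.Range(0, 12).Select(i =>
            $@"{{ ""timestamp"": ""2021-01-{i + 1:00}T00:00:00Z"", ""value"": {(i == 11 ? 90.0 : 30.0 + i)} }}");
        var body = $@"{{ ""granularity"": ""daily"", ""series"": [ {string.Join(",", points)} ] }}";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

        // Ask whether the latest point is anomalous given the preceding history.
        var response = await client.PostAsync(
            $"{endpoint}/anomalydetector/v1.0/timeseries/last/detect",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The returned JSON includes isAnomaly, expectedValue, and margin boundaries.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```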
After signing up:
1. Send a request to the Anomaly Detector API with your data. 1. Process the API response by parsing the returned JSON message. + ## Algorithms * See the following technical blogs for information about the algorithms used:
cognitive-services Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Configure.md
- Title: Configure Review tool settings - Content Moderator-
-description: Use the Review tool to configure or retrieve your team, tags, connectors, workflows, and credentials for Content Moderator.
------- Previously updated : 10/28/2021-
-#How to set any of the configurations in the review tool.
--
-# Configure the Review tool
---
-The [Review tool](https://contentmoderator.cognitive.microsoft.com) has several important features that you can access through the **Settings** menu on the dashboard.
-
-![Content Moderator Review tool settings menu](images/settings-1.png)
-
-## Manage team and subteams
-
-The **Team** tab lets you manage your team and subteams&mdash;groups of users who can be notified when certain [human reviews](../review-api.md#reviews) are started. You can only have one team (which you create when you sign up with the Review tool), but you can create multiple subteams. The team administrator can invite members, set their permissions, and assign them to different subteams.
-
-![Review tool team settings](images/settings-2-team.png)
-
-Subteams are useful for creating escalation teams or teams dedicated to reviewing specific categories of content. For example, you might send adult content to a separate team for further review.
-
-This section explains how to create subteams and quickly assign reviews on the fly. However, you can use [Workflows](workflows.md) to assign reviews based on specific criteria.
-
-### Create a subteam
-
-Go to the **Subteams** section and click **Add Subteam**. Enter your subteam name in the dialog and click **Save**.
-
-![Subteam Name](images/1-Teams-2.PNG)
-
-#### Invite teammates
-
-You cannot assign someone to a subteam if they are not already a member of the default team, so you need to add reviewers to the default team first. Click **Invite** on the **Team** tab.
-
-![Invite users](images/invite-users.png)
-
-#### Assign teammates to subteam
-
-Click the **Add Member** button to assign members from your default team to one or more subteams. You can only add existing users to a subteam. For adding new users who are not in the review tool, invite them by using the "Invite" button on the Team Settings page.
-
-![Assign subteam members](images/1-Teams-3.PNG)
-
-### Assign reviews to subteams
-
-Once you have created your subteams and assigned members, you can start assigning content [reviews](../review-api.md#reviews) to those subteams. This is done from the **Review** tab of the site.
-To assign content to a subteam, click the ellipsis in the upper-right corner, select **Move to**, and select a subteam.
-
-![Assign image review to subteam](images/3-review-image-subteam-1.png)
-
-### Switch between subteams
-
-If you are a member of more than one subteam, you can switch between those subteams to change which content reviews are displayed for you. In the **Review** tab, select the drop-down menu labeled **Default** and select **Choose Subteam**. You can view the content reviews for different subteams, but only those of which you are a member.
-
-![Switch between subteams](images/3-review-image-subteam-2.png)
-
-## Tags
-
-The **Tags** tab lets you define custom moderation tags in addition to the two default moderation tags&mdash;**isadult** (**a**) and **isracy** (**r**). When you create a custom tag, it becomes available in reviews alongside the default tags. You can change which tags show up in reviews by switching their visibility settings.
-
-![Tags view, including the "Is visible" checkboxes](images/tags-4-disable.png)
-
-### Create custom tags
-
-To create a new tag, you must enter a short code, name, and description in the respective fields.
--- **Short code**: Enter a two-letter code for your tag. Example: **cb**-- **Name**: Enter a short and descriptive tag name in lowercase without spaces. Example: **isbullying**.-- **Description**: (optional) Enter a description of the kind of content that your tag targets. Example: **Depictions or instances of cyber bullying**.-
-Click **Add** to add a tag, and click **Save** when you are finished creating tags.
-
-![Review tool create new tag dialog](images/settings-3-tags.png)
-
-### Delete tags
-
-You can delete custom tags by selecting the trash icon next to their entries on the Tags list, but you cannot delete the default tags.
-
-## Connectors
-
-The **Connectors** tab lets you manage your connectors, which are service-specific plugins that can process content in different ways as part of content [workflows](../review-api.md#workflows).
-
-The default connector when you create a workflow is the Content Moderator connector, which can mark content as **adult** or **racy**, find profanity, and so on. However, you can use other connectors, listed here, as long as you have credentials for their respective services (to use the Face connector, for example, you will need to get a [Face](../../face/overview.md) subscription key).
-
-The [Review tool](./human-in-the-loop.md) includes the following connectors:
--- Emotion-- Face-- PhotoDNA Cloud Service-- Language service-
-### Add a connector
-
-To add a connector (and make it available for use in content [workflows](../review-api.md#workflows)), select the appropriate **Connect** button. In the next dialog, enter your subscription key for that service. When you are done, your new connector should appear at the top of the page.
-
-![Content Moderator Connectors Settings](images/settings-4-connectors.png)
-
-## Workflows
-
-The **workflows** tab lets you manage your [workflows](../review-api.md#workflows). Workflows are cloud-based filters for content, and they work with connectors to sort content in different ways and take appropriate actions. Here, you can define, edit, and test your workflows. See [Define and use workflows](Workflows.md) for guidance on how to do this.
-
-![Content Moderator Workflow Settings](images/settings-5-workflows.png)
-
-## Credentials
-
-The **Credentials** tab provides quick access to your Content Moderator subscription key, which you will need to access any of the moderation services from a REST call or client SDK.
-
-![Content Moderator Credentials](images/settings-6-credentials.png)
-
-### Use external credentials for workflows
-
-The [Review tool](https://contentmoderator.cognitive.microsoft.com) generates a free trial key for Azure Content Moderator services when you sign up, but you can also configure it to use an existing key from your Azure account. This is recommended for large-scale scenarios, as free trial keys have strict usage limits ([Pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/)).
-
-If you have created a [Content Moderator resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) in Azure, navigate to it in the Azure portal and select the **Keys** blade. Copy one of your keys.
-
-![Content Moderator keys in the Azure portal](images/credentials-azure-portal-keys.PNG)
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com)'s **Credentials** tab, go to the **Workflow Settings** pane, select **Edit**, and paste your key into the **Ocp-Apim-Subscription-Key** field. Now, workflows that call the moderation APIs will use your Azure credential.
-
-> [!NOTE]
-> The other two fields in the **Workflow Settings** pane are for custom term and image lists. See the [Custom terms](../try-terms-list-api.md) or [Custom images](../try-image-list-api.md) guides to learn about these.
-
-### Use your Azure account with the review APIs
-
-To use your Azure key with the review APIs, you need to retrieve your Resource ID. Go to your Content Moderator resource in the Azure portal and select the **Properties** blade. Copy the Resource ID value and paste it into the **Whitelisted Resource Id(s)** field of the Review tool's **Credentials** tab.
-
-![Content Moderator Resource ID in the Azure portal](images/credentials-azure-portal-resourceid.PNG)
-
-If you've entered your subscription key in both places, the trial key that comes with your Review tool account will not be used but will remain available.
-
-## Next steps
-
-Follow the [Review tool quickstart](../quick-start.md) to start using the Review tool in content moderation scenarios.
cognitive-services Review Moderated Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Review-Moderated-Images.md
- Title: Use content reviews through the Review tool - Content Moderator-
-description: Learn how the Review tool allows human moderators to review images in a web portal.
------- Previously updated : 03/15/2019-
-# Customer intent: use reviews with the Review tool.
--
-# Create human reviews
--
-In this guide, you'll learn how to set up [reviews](../review-api.md#reviews) on the Review tool website. Reviews store and display content for human moderators to assess. Moderators can alter the applied tags and apply their own custom tags as appropriate. When a user completes a review, the results are sent to a specified callback endpoint, and the content is removed from the site.
-
-## Prerequisites
--- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.-
-## Image reviews
-
-1. Go to the [Review tool](https://contentmoderator.cognitive.microsoft.com/), select the **Try** tab, and upload some images to review.
-1. Once the uploaded images have finished processing, go to the **Review** tab and select **Image**.
-
- ![Chrome browser showing the review tool with the Review Image option highlighted](images/review-images-1.png)
-
- The images display with any labels that have been assigned by the automatic moderation process. The images you've submitted through the Review tool are not visible to other reviewers.
-
-1. Optionally, move the **Reviews to display** slider (1) to adjust the number of images that are displayed on the screen. Click on the **tagged** or **untagged** buttons (2) to sort the images accordingly. Click on a tag panel (3) to toggle it on or off.
-
- ![Chrome browser showing the Review tool with tagged images for review](images/review-images-2.png)
-
-1. To see more information on an image, click on the ellipsis in the thumbnail and select **View details**. You can assign an image to a subteam with the **Move to** option (see the [teams](./configure.md#manage-team-and-subteams) section to learn more about subteams).
-
- ![An image with the View details option highlighted](images/review-images-3.png)
-
-1. Browse the image moderation information on the details page.
-
- ![An image with moderation details listed in a separate pane](images/review-images-4.png)
-
-1. Once you have reviewed and updated the tag assignments as needed, click **Next** to submit your reviews. After you submit, you have about five seconds to click the **Prev** button to return to the previous screen and review images again. After that, the images are no longer in the Submit queue and the **Prev** button is no longer available.
-
-## Text reviews
-
-Text reviews function similarly to image reviews. Instead of uploading content, you simply write or paste in text (up to 1,024 characters). Then, Content Moderator analyzes the text and applies tags (in addition to other moderation information, such as profanity and personal data). In text reviews, you can toggle the applied tags and/or apply custom tags before submitting the review.
-
-![Screenshot of the review tool showing flagged text in a Chrome browser window](../images/reviewresults_text.png)
-
-## Next steps
-
-In this guide, you learned how to set up and use reviews from the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com). Next, see the [API console guide](../try-review-api-review.md) or the [.NET SDK quickstart](../client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to learn how to create reviews programmatically.
cognitive-services Workflows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Workflows.md
- Title: Define and use content workflows through the Review tool - Content Moderator-
-description: You can use the Azure Content Moderator workflow designer to define custom workflows and thresholds based on your content policies.
------- Previously updated : 04/04/2019-
-#Customer intent: use workflows from the Review tool
--
-# Define and use moderation workflows
--
-In this guide, you'll learn how to set up and use [workflows](../review-api.md#workflows) on the [Review tool](https://contentmoderator.cognitive.microsoft.com) website. Workflows are cloud-based customized filters that you can use to handle content more efficiently. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. This guide shows you how to use the Content Moderator connector (which is included by default) to filter content and set up human reviews in a typical moderation scenario.
-
-## Create a new workflow
-
-Go to the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and sign in. On the **Settings** tab, select **Workflows**.
-
-![Workflows setting](images/2-workflows-0.png)
-
-On the next screen, select **Add Workflow**.
-
-![Add a workflow](images/2-workflows-1.png)
-
-### Assign a name and description
-
-Name your workflow, enter a description, and choose whether the workflow will handle images or text.
-
-![Workflow name and description](images/image-workflow-create.PNG)
-
-### Define evaluation criteria
-
-On the next screen, go to the **If** section. In the top dropdown menu, choose **Condition**. This will allow you to configure the condition on which the workflow will take action. If you want to use multiple conditions, choose **Combination** instead.
-
-Next, select a connector. This example uses **Content Moderator**. Depending on the connector you choose, you will get different options for data output. See the [Connectors](./configure.md#connectors) section of the Review tool settings guide to learn how to set up other connectors.
-
-![Select workflow connector](images/image-workflow-connect-to.PNG)
-
-Choose the desired output to use and set the conditions to check it against.
-
-![Define workflow condition](images/image-workflow-condition.PNG)
-
-### Define the action
-
-Go to the **Then** section, where you select an action. The following example creates an image review and assigns a tag. Optionally, you can add an alternative (Else) path and set an action for that as well.
-
-![Define workflow action](images/image-workflow-action.PNG)
-
-### Save the workflow
-
-Note the workflow name; you need the name to start a moderation job with the Workflow API (see below). Finally, save the workflow using the **Save** button at the top of the page.
-
-## Test the workflow
-
-Now that you have defined a custom workflow, test it with sample content. Go to **Workflows** and select the corresponding **Execute Workflow** button.
-
-![Workflow test](images/image-workflow-execute.PNG)
-
-Save this [sample image](https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg) to your local drive. Then select **Choose File(s)** and upload the image to the workflow.
-
-![A runner with a quote superimposed on the image](images/sample-text.jpg)
-
-### Track progress
-
-You can view the progress of the workflow in the next popup window.
-
-![Track workflow execution](images/image-workflow-job.PNG)
-
-### Verify workflow action
-
-Go to the **Image** tab under **Review** and verify that there is a newly created image review.
-
-![Review images](images/image-workflow-review.PNG)
-
-## Next steps
-
-In this guide, you learned how to set up and use moderation workflows from the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com). Next, see the [API console guide](../try-review-api-workflow.md) to learn how to create workflows programmatically.
cognitive-services Human In The Loop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/human-in-the-loop.md
Title: Learn Review tool concepts - Content Moderator
description: Learn about the Content Moderator Review tool, a website that coordinates a combined AI and human review moderation effort.
Previously updated : 10/28/2021
#Conceptual on the Review tool. Includes settings and dashboard
-# Content Moderator Review tool
--
-Azure Content Moderator provides services to combine machine learning content moderation with human reviews. The [Review tool](https://contentmoderator.cognitive.microsoft.com) website is a user-friendly front end that gives detailed access to these services.
-
-## What it does
-
-The [Review tool](https://contentmoderator.cognitive.microsoft.com), when used in conjunction with the machine-assisted moderation APIs, allows you to accomplish the following tasks in the content moderation process:
- Use one set of tools to moderate content in multiple formats (text, image, and video).
- Automate the creation of human [reviews](../review-api.md#reviews) when moderation API results come in.
- Assign or escalate content reviews to multiple review teams, organized by content category or experience level.
- Use default or custom logic filters ([workflows](../review-api.md#workflows)) to sort and track content, without writing any code.
- Use [connectors](./configure.md#connectors) to process content with Microsoft PhotoDNA, Language, and Face services in addition to the Content Moderator APIs.
- Get key performance metrics on your content moderation processes.
-## Review tool dashboard
-
-On the **Dashboard** tab, you can see key metrics for content reviews done within the tool. See the number of total, complete, and pending reviews for image, text, and video content.
-
-The **Pending reviews** table shows the breakdown of users and subteams that have pending or completed reviews, as well as the SLA time remaining. You can select the items in the table to go to their reviews. The search box above the table lets you filter results by team name, and the **Filter** icon lets you filter by other metrics.
-
-Switching to the **Completed reviews** tab shows the total number of items processed or completed by users and subteams. You can filter this data the same as the pending reviews.
-
-Clicking the text in the upper right corner of the dashboard displays the Daily Personal Metrics, which reports the number of reviews completed for each content type.
-
-> [!div class="mx-imgBorder"]
-> ![The review tool dashboard in a browser](images/0-dashboard.png)
-
-## Review tool credentials
-
-When you sign up with the [Review tool](https://contentmoderator.cognitive.microsoft.com), you'll be prompted to select an Azure region for your account. This is because the [Review tool](https://contentmoderator.cognitive.microsoft.com) generates a free trial key for Azure Content Moderator services. You'll need this key to access any of the services from a REST call or client SDK. You can view your key and API endpoint URL by selecting **Admin** > **Credentials**.
-
-> [!div class="mx-imgBorder"]
-> ![Content Moderator Credentials](images/settings-6-credentials.png)
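
As a rough illustration of how that key and endpoint are used, here is a minimal sketch (not part of the original article) that screens a text sample with the Text Moderation API. The endpoint host is a placeholder, and the request shape (`Ocp-Apim-Subscription-Key` header, `PII` query parameter) follows the public REST reference:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ScreenTextDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // The key shown on the Admin > Credentials page.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR-KEY");

        // Screen a short text sample for profanity and personal data.
        var body = new StringContent("This is a test string.", Encoding.UTF8, "text/plain");
        var response = await client.PostAsync(
            "https://YOUR-REGION.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?PII=true",
            body);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```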
-
-## Next steps
-
-See [Configure the Review tool](./configure.md) to learn how to access Review tool resources and change settings.
cognitive-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/api-reference.md
Title: API reference - Content Moderator
-description: Learn about the various content moderation and review APIs for Content Moderator.
+description: Learn about the content moderation APIs for Content Moderator.
# Content Moderator API reference
-You can get started with Azure Content Moderator APIs in the following ways:
+You can get started with Azure Content Moderator APIs by doing the following:
- In the Azure portal, [subscribe to the Content Moderator API](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator).
- See [Try Content Moderator on the web](quick-start.md) to sign up with the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/).
-## Moderation APIs
-
-You can use the following Content Moderator APIs to set up your post-moderation workflows.
+You can use the following **Content Moderator APIs** to set up your post-moderation workflows.
| Description | Reference |
| -- |-|
-| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow. <br /><br />| [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
-| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow.<br /><br /> | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
-| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow.<br /><br /> | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
+| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
+| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
+| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API reference") |
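
For orientation, the following is a minimal sketch of calling the first API in the table from C#. It is not from the article itself; the endpoint host is a placeholder, and the request shape (`DataRepresentation`/`Value` body, `Ocp-Apim-Subscription-Key` header) follows the public Image Moderation reference:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class EvaluateImageDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR-KEY");

        // Ask the Image Moderation API to score an image by URL.
        var body = new StringContent(
            "{\"DataRepresentation\":\"URL\",\"Value\":\"https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg\"}",
            Encoding.UTF8, "application/json");
        var response = await client.PostAsync(
            "https://YOUR-REGION.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate",
            body);

        // The JSON result includes adult/racy classification scores and flags.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```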
-## Review APIs
-
-The Review APIs have the following components:
-
-| Description | Reference |
-| -- |-|
-| **Jobs**<br /><br /> Initiate scan-and-review moderation workflows for both image and text content. A moderation job scans your content by using the Image Moderation API and the Text Moderation API. Moderation jobs use the defined and default workflows to generate reviews. <br /><br />After a human moderator has reviewed the auto-assigned tags and prediction data and submitted a content moderation decision, the Review API submits all information to your API endpoint.<br /><br /> | [Job reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5 "Job reference") |
-| **Reviews**<br /><br />Use the Review tool to directly create image or text reviews for human moderators.<br /><br /> After a human moderator has reviewed the auto-assigned tags and prediction data and submitted a content moderation decision, the Review API submits all information to your API endpoint.<br /><br /> | [Review reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4 "Review reference") |
-| **Workflows**<br /><br />Create, update, and get details about the custom workflows that your team creates. You define workflows by using the Review tool. <br /> <br />Workflows typically use Content Moderator, but can also use certain other APIs that are available as connectors in the Review tool.<br /><br /> | [Workflow reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59 "Workflow reference") |
cognitive-services Ecommerce Retail Catalog Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md
- Title: "Tutorial: Moderate e-commerce product images - Content Moderator"-
-description: This tutorial shows how to set up an application to analyze and classify product images with specified labels (using Azure Computer Vision and Custom Vision). Tag objectionable images to be further reviewed (using Azure Content Moderator).
------- Previously updated : 01/29/2021---
-#Customer intent: As a developer at an e-commerce company, I want to use machine learning to both categorize product images and tag objectionable images for further review by my team.
--
-# Tutorial: Moderate e-commerce product images with Azure Content Moderator
--
-In this tutorial, you'll learn how to use Azure Cognitive Services, including Content Moderator, to classify and moderate product images for an e-commerce scenario. You'll use Computer Vision and Custom Vision to apply tags (labels) to images, and then you'll create a team review, which combines Content Moderator's machine-learning-based technologies with human review teams to provide an intelligent moderation system.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Sign up for Content Moderator and create a review team.
-> * Use Content Moderator's image API to scan for potential adult and racy content.
-> * Use the Computer Vision service to scan for celebrity content (or other Computer-Vision-detectable tags).
-> * Use the Custom Vision service to scan for the presence of flags, toys, and pens (or other custom tags).
-> * Present the combined scan results for human review and final decision making.
-
-The complete sample code is available in the [Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) repository on GitHub.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
- A Computer Vision subscription key (same instructions as above).
- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
- A set of images for each label that the Custom Vision classifier will use (in this case toys, pens, and US flags).
-## Create a review team
-
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
-
-## Create custom moderation tags
-
-Next, create custom tags in the Review tool (see the [Tags](./review-tool-user-guide/configure.md#tags) article if you need help with this process). In this case, we will add the following tags: **celebrity**, **USA**, **flag**, **toy**, and **pen**. Not all of the tags need to be detectable categories in Computer Vision (like **celebrity**); you can add your own custom tags as long as you train the Custom Vision classifier to detect them later on.
-
-![Configure custom tags](images/tutorial-ecommerce-tags2.PNG)
-
-## Create Visual Studio project
-
-1. In Visual Studio, open the New Project dialog. Expand **Installed**, then **Visual C#**, then select **Console app (.NET Framework)**.
-1. Name the application **EcommerceModeration**, then select **OK**.
-1. If you're adding this project to an existing solution, select this project as the single startup project.
-
-This tutorial highlights the code that is central to the project, but it won't cover every line of code. Copy the full contents of _Program.cs_ from the sample project ([Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration)) into the _Program.cs_ file of your new project. Then, step through the following sections to learn about how the project works and how to use it yourself.
-
-## Define API keys and endpoints
-
-This tutorial uses three cognitive services; therefore, it requires three corresponding keys and API endpoints. See the following fields in the **Program** class:
-
-[!code-csharp[define API keys and endpoint URIs](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=21-29)]
-
-You'll need to update the `___Key` fields with the values of your subscription keys and change the `___Uri` fields to the correct endpoint URLs (you'll get the Custom Vision key and endpoint later on). You can find these values on the **Quick start** tab of each Azure resource. Fill in the `YOURTEAMID` part of the `ReviewUri` field with the ID of the review team you created earlier. You'll fill in the final part of the `CustomVisionUri` field later on.
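
As a rough sketch of what those fields look like once filled in (field names other than `ReviewUri` and `CustomVisionUri` are illustrative, not copied from the sample, and the URL shapes are assumptions):

```csharp
// Illustrative only: the real field names live in the sample's Program.cs.
internal static class ApiConfig
{
    public static readonly string ContentModeratorKey = "YOUR-CONTENT-MODERATOR-KEY";
    public static readonly string ContentModeratorUri =
        "https://YOUR-REGION.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0";

    public static readonly string ComputerVisionKey = "YOUR-COMPUTER-VISION-KEY";
    public static readonly string ComputerVisionUri =
        "https://YOUR-REGION.api.cognitive.microsoft.com/vision/v2.0";

    // Replace YOURTEAMID with the ID of the review team you created earlier.
    public static readonly string ReviewUri =
        "https://YOUR-REGION.api.cognitive.microsoft.com/contentmoderator/review/v1.0/teams/YOURTEAMID/reviews";

    // You'll get these from the Custom Vision portal later in the tutorial.
    public static readonly string CustomVisionKey = "YOUR-PREDICTION-KEY";
    public static readonly string CustomVisionUri = "";
}
```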
--
-## Primary method calls
-
-See the following code in the **Main** method, which loops through a list of image URLs. It analyzes each image with the three different services, records the applied tags in the **ReviewTags** array, and then creates a review for human moderators by sending the images to the Content Moderator Review Tool. You will explore these methods in the following sections. If you wish, you can control which images are sent to review, using the **ReviewTags** array in a conditional statement to check which tags were applied.
-
-[!code-csharp[Main: evaluate each image and create review](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=53-70)]
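
The overall shape of that loop is sketched below. The method names follow the section headings in this tutorial, while the dictionary type and exact signatures are assumptions rather than the sample's code (the sample passes an array of key-value pairs):

```csharp
// Hypothetical signatures -- the sample's real methods are covered in the
// sections that follow; a Dictionary stands in for the key-value-pair array.
static void ModerateImages(IEnumerable<string> imageUrls)
{
    foreach (string imageUrl in imageUrls)
    {
        // Tracks which tags apply to this image (for example "a", "r",
        // "celebrity", "flag", "toy", "pen").
        var reviewTags = new Dictionary<string, bool>();

        EvaluateAdultRacy(imageUrl, reviewTags);          // Content Moderator scores
        EvaluateComputerVisionTags(imageUrl, reviewTags); // celebrity detection
        EvaluateCustomVisionTags(imageUrl, reviewTags);   // flags, toys, pens

        // Optionally check reviewTags here to decide whether a review is needed.
        CreateReview(imageUrl, reviewTags);
    }
}
```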
-
-## EvaluateAdultRacy method
-
-See the **EvaluateAdultRacy** method in the **Program** class. This method takes an image URL and an array of key-value pairs as parameters. It calls the Content Moderator Image REST API to get the Adult and Racy scores of the image. If the score for either is greater than 0.4 (the range is between 0 and 1), it sets the corresponding value in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateAdultRacy method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=73-113)]
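
The thresholding step can be pictured like this: a sketch assuming the Evaluate response has already been deserialized. The property names follow the public REST reference, and the `a`/`r` keys mirror the Review tool's built-in adult and racy tags; none of this is the sample's exact code:

```csharp
using System.Collections.Generic;

// Property names from the public ProcessImage/Evaluate response.
public class EvaluateResult
{
    public double AdultClassificationScore { get; set; }
    public double RacyClassificationScore { get; set; }
}

public static class AdultRacyTagger
{
    public static void Apply(EvaluateResult result, IDictionary<string, bool> reviewTags)
    {
        // Scores range from 0 to 1; the tutorial flags anything above 0.4.
        reviewTags["a"] = result.AdultClassificationScore > 0.4;
        reviewTags["r"] = result.RacyClassificationScore > 0.4;
    }
}
```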
-
-## EvaluateComputerVisionTags method
-
-The next method takes an image URL and your Computer Vision subscription information and analyzes the image for the presence of celebrities. If one or more celebrities are found, it sets the corresponding value in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateComputerVisionTags method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=115-146)]
-
-## EvaluateCustomVisionTags method
-
-Next, see the **EvaluateCustomVisionTags** method, which classifies the actual products&mdash;in this case flags, toys, and pens. Follow the instructions in the [How to build a classifier](../custom-vision-service/getting-started-build-a-classifier.md) guide to build your own custom image classifier and detect flags, toys, and pens (or whatever you chose as your custom tags) in images. You can use the images in the **sample-images** folder of the [GitHub repo](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) to quickly train some of the categories in this example.
-
-![Custom Vision web page with training images of pens, toys, and flags](images/tutorial-ecommerce-custom-vision.PNG)
-
-Once you've trained your classifier, get the prediction key and prediction endpoint URL (see [Get the URL and prediction key](../custom-vision-service/use-prediction-api.md#get-the-url-and-prediction-key) if you need help with retrieving them), and assign these values to your `CustomVisionKey` and `CustomVisionUri` fields, respectively. The method uses these values to query the classifier. If the classifier finds one or more of the custom tags in the image, this method sets the corresponding value(s) in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateCustomVisionTags method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=148-171)]
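
A hedged sketch of that query follows. The prediction URL and key are the values you just retrieved, and the `Prediction-Key` header is from the Custom Vision REST reference; the 0.5 probability cutoff is an illustrative assumption, and JSON property casing varies by prediction API version:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class CustomVisionSketch
{
    public static async Task TagAsync(string imageUrl, IDictionary<string, bool> reviewTags)
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Prediction-Key", "YOUR-PREDICTION-KEY");

        // Send the image URL to your trained classifier's prediction endpoint.
        var body = new StringContent($"{{\"Url\":\"{imageUrl}\"}}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync("YOUR-CUSTOM-VISION-PREDICTION-URL", body);
        response.EnsureSuccessStatusCode();

        var json = JObject.Parse(await response.Content.ReadAsStringAsync());
        foreach (var prediction in json["predictions"])
        {
            // Mark the custom tag (flag, toy, pen, ...) when confidence is high enough.
            if ((double)prediction["probability"] > 0.5)
                reviewTags[(string)prediction["tagName"]] = true;
        }
    }
}
```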
-
-## Create reviews for Review tool
-
-In the previous sections, you explored how the app scans incoming images for adult and racy content (Content Moderator), celebrities (Computer Vision), and various other objects (Custom Vision). Next, see the **CreateReview** method, which uploads the images with all of their applied tags (passed in as _Metadata_) to the Content Moderator Review Tool.
-
-[!code-csharp[define CreateReview method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=173-196)]
-
-The images will show up in the Review tab of the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/).
-
-![Screenshot of the Content Moderator Review tool with several images and their highlighted tags](images/tutorial-ecommerce-content-moderator.PNG)
-
-## Submit a list of test images
-
-As you can see in the **Main** method, this program looks for a "C:\Test" directory with a _Urls.txt_ file that contains a list of image URLs. Create this file and directory, or change the path to point to your text file. Then populate this file with the URLs of images you'd like to test.
-
-[!code-csharp[Main: set up test directory, read lines](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=38-51)]
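
In outline, the file handling amounts to something like this (a sketch, not the sample's exact code):

```csharp
using System;
using System.IO;

// Read one image URL per line from the test file, if it exists.
string testFile = @"C:\Test\Urls.txt";
string[] imageUrls = File.Exists(testFile)
    ? File.ReadAllLines(testFile)
    : Array.Empty<string>();

Console.WriteLine($"Found {imageUrls.Length} image URLs to moderate.");
```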
-
-## Run the program
-
-If you've followed all of the above steps, the program should process each image (querying all three services for their relevant tags) and then upload the images with tag information to the Content Moderator Review Tool.
-
-## Next steps
-
-In this tutorial, you set up a program to analyze product images, tag them by product type, and allow a review team to make informed decisions about content moderation. Next, learn more about the details of image moderation.
-
-> [!div class="nextstepaction"]
-> [Review moderated images](./review-tool-user-guide/review-moderated-images.md)
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/encrypt-data-at-rest.md
Customer-managed keys are available in all Azure regions.
[!INCLUDE [cognitive-services-cmk](../includes/configure-customer-managed-keys.md)]
-## Enable data encryption for your Content Moderator Team
-
-To enable data encryption for your Content Moderator Review Team, see the [Quickstart: Try Content Moderator on the web](quick-start.md#create-a-review-team).
-
-> [!NOTE]
-> You'll need to provide a _Resource ID_ with the Content Moderator E0 pricing tier.
- ## Next steps * For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/export-delete-data.md
[!INCLUDE [deprecation notice](includes/tool-deprecation.md)]
-Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Review tool](https://contentmoderator.cognitive.microsoft.com/) and the [Moderation and Review APIs](./api-reference.md).
+Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
For more information on how to export and delete user data in Content Moderator,
| Data | Export Operation | Delete Operation |
| - | - | - |
-| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
+| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). |
| Images for custom matching | Call the [Get image IDs API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f676). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f686). Or delete the Content Moderator resource using the Azure portal. |
-| Terms for custom matching | Call the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e) | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
-| Tags | N/A | Use the **Delete** icon available for each tag in the Review UI Tag settings page. Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
-| Reviews | Call the [Get review API](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2) | Use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page.
-| Users | N/A | Use the **Delete** icon available for each user in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
+| Terms for custom matching | Call the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e) | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
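
As an illustration of the export operations in the table, the following sketch calls the Get image IDs operation; the host and list ID are placeholders, and the URL shape follows the public List Management reference:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ExportImageIds
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR-KEY");

        // The list ID comes from one of your own custom image lists.
        string listId = "YOUR-IMAGE-LIST-ID";
        var response = await client.GetAsync(
            $"https://YOUR-REGION.api.cognitive.microsoft.com/contentmoderator/lists/v1.0/imagelists/{listId}/images");

        // The response contains image IDs only; the hashed images themselves
        // cannot be exported.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```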
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
- Title: "Tutorial: Moderate Facebook content - Content Moderator"-
-description: In this tutorial, you will learn how to use machine-learning-based Content Moderator to help moderate Facebook posts and comments.
------- Previously updated : 01/29/2021-
-#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
--
-# Tutorial: Moderate Facebook posts and comments with Azure Content Moderator
--
-In this tutorial, you will learn how to use Azure Content Moderator to help moderate the posts and comments on a Facebook page. Facebook will send the content posted by visitors to the Content Moderator service. Then your Content Moderator workflows will either publish the content or create reviews within the Review tool, depending on the content scores and thresholds. See the [Build 2017 demo video](https://channel9.msdn.com/Events/Build/2017/T6033) for a working example of this scenario.
-
-> [!IMPORTANT]
-> In 2018, Facebook implemented a more strict vetting policy for Facebook Apps. You will not be able to complete the steps of this tutorial if your app has not been reviewed and approved by the Facebook review team.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Create a Content Moderator team.
-> * Create Azure Functions that listen for HTTP events from Content Moderator and Facebook.
-> * Link a Facebook page to Content Moderator using a Facebook application.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-This diagram illustrates each component of this scenario:
-
-![Diagram of Content Moderator receiving information from Facebook through "FBListener" and sending information through "CMListener"](images/tutorial-facebook-moderation.png)
-
-## Prerequisites
- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
- A [Facebook account](https://www.facebook.com/).
-## Create a review team
-
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
-
-## Configure image moderation workflow
-
-Refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide to create a custom image workflow. Content Moderator will use this workflow to automatically check images on Facebook and send some to the Review tool. Take note of the workflow **name**.
-
-## Configure text moderation workflow
-
-Again, refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide; this time, create a custom text workflow. Content Moderator will use this workflow to automatically check text content. Take note of the workflow **name**.
-
-![Configure Text Workflow](images/text-workflow-configure.PNG)
-
-Test your workflow using the **Execute Workflow** button.
-
-![Test Text Workflow](images/text-workflow-test.PNG)
-
-## Create Azure Functions
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
-
-1. Create an Azure Function App as shown on the [Azure Functions](../../azure-functions/functions-create-function-app-portal.md) page.
-1. Go to the newly created Function App.
-1. Within the App, go to the **Platform features** tab and select **Configuration**. In the **Application settings** section of the next page, select **New application setting** to add the following key/value pairs:
-
- | App Setting name | value |
- | -- |-|
- | `cm:TeamId` | Your Content Moderator TeamId |
- | `cm:SubscriptionKey` | Your Content Moderator subscription key - See [Credentials](./review-tool-user-guide/configure.md#credentials) |
- | `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource.|
- | `cm:ImageWorkflow` | Name of the workflow to run on Images |
- | `cm:TextWorkflow` | Name of the workflow to run on Text |
- | `cm:CallbackEndpoint` | Url for the CMListener Function App that you will create later in this guide |
- | `fb:VerificationToken` | A secret token that you create, used to subscribe to the Facebook feed events |
 | `fb:PageAccessToken` | The Facebook Graph API access token. It does not expire and allows the function to hide or delete posts on your behalf. You will get this token at a later step. |
-
- Click the **Save** button at the top of the page.
-
-1. Go back to the **Platform features** tab. Use the **+** button on the left pane to bring up the **New function** pane. The function you are about to create will receive events from Facebook.
-
- ![Azure Functions pane with the Add Function button highlighted.](images/new-function.png)
-
- 1. Click on the tile that says **Http trigger**.
- 1. Enter the name **FBListener**. The **Authorization Level** field should be set to **Function**.
- 1. Click **Create**.
- 1. Replace the contents of the **run.csx** with the contents from **FbListener/run.csx**
-
- [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/FbListener/run.csx?range=1-154)]
-
-1. Create a new **Http trigger** function named **CMListener**. This function receives events from Content Moderator. Replace the contents of the **run.csx** with the contents from **CMListener/run.csx**
-
 [!code-csharp[CMListener: csx file](~/samples-fbPageModeration/CmListener/run.csx?range=1-110)]
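
At run time, both functions read the app settings you configured earlier; in Azure Functions these surface as environment variables. A sketch of the pattern, not the sample's exact code:

```csharp
using System;

// Each key matches an entry in the Application settings table above.
string teamId = Environment.GetEnvironmentVariable("cm:TeamId");
string subscriptionKey = Environment.GetEnvironmentVariable("cm:SubscriptionKey");
string imageWorkflow = Environment.GetEnvironmentVariable("cm:ImageWorkflow");
string verificationToken = Environment.GetEnvironmentVariable("fb:VerificationToken");
```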
---
-## Configure the Facebook page and App
-
-1. Create a Facebook App.
-
- ![facebook developer page](images/facebook-developer-app.png)
-
- 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
- 1. Go to **My Apps**.
- 1. Add a New App.
- 1. Provide a name
- 1. Select **Webhooks -> Set Up**
- 1. Select **Page** in the dropdown menu and select **Subscribe to this object**
- 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings**
- 1. Once subscribed, scroll down to feed and select **subscribe**.
- 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
-
-1. Create a Facebook Page.
-
- > [!IMPORTANT]
- > In 2018, Facebook implemented a more strict vetting of Facebook apps. You will not be able to execute sections 2, 3 and 4 if your app has not been reviewed and approved by the Facebook review team.
-
- 1. Navigate to [Facebook](https://www.facebook.com/pages) and create a **new Facebook Page**.
- 1. Allow the Facebook App to access this page by following these steps:
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 1. Select **Application**.
    1. Select **Page Access Token** and send a **GET** request.
    1. Select the **Page ID** in the response.
    1. Append **/subscribed_apps** to the URL and send a **GET** request (the response is empty).
    1. Submit a **POST** request to the same URL. You get the response **success: true**.
-
-3. Create a non-expiring Graph API access token.
-
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 2. Select the **Application** option.
- 3. Select the **Get User Access Token** option.
    4. Under **Select Permissions**, select the **manage_pages** and **publish_pages** options.
- 5. We will use the **access token** (Short Lived Token) in the next step.
-
-4. We use Postman for the next few steps.
-
- 1. Open **Postman** (or get it [here](https://www.getpostman.com/)).
- 2. Import these two files:
- 1. [Postman Collection](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/Facebook%20Permanant%20Page%20Access%20Token.postman_collection.json)
- 2. [Postman Environment](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/FB%20Page%20Access%20Token%20Environment.postman_environment.json)
- 3. Update these environment variables:
-
- | Key | Value |
- | -- |-|
- | appId | Insert your Facebook App Identifier here |
- | appSecret | Insert your Facebook App's secret here |
- | short_lived_token | Insert the short lived user access token you generated in the previous step |
- 4. Now run the 3 APIs listed in the collection:
- 1. Select **Generate Long-Lived Access Token** and click **Send**.
- 2. Select **Get User ID** and click **Send**.
- 3. Select **Get Permanent Page Access Token** and click **Send**.
- 5. Copy the **access_token** value from the response and assign it to the App setting, **fb:PageAccessToken**.
-
-The solution sends all images and text posted on your Facebook page to Content Moderator. Then the workflows that you configured earlier are invoked. Content that doesn't meet the criteria defined in your workflows is sent to reviews in the Review tool. The rest of the content is published automatically.
-
-## Next steps
-
-In this tutorial, you set up a solution to moderate the posts and comments on a Facebook page, using workflows to publish acceptable content and route the rest to human review. Next, learn more about the details of image moderation.
-
-> [!div class="nextstepaction"]
-> [Image moderation](./image-moderation-api.md)
cognitive-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/image-moderation-api.md
Title: Image Moderation - Content Moderator
-description: Use Content Moderator's machine-assisted image moderation and human-in-the-loop Review tool to moderate images for adult and racy content.
+description: Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content.
# Learn image moderation concepts
-Use Content Moderator's machine-assisted image moderation and [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) to moderate images for adult and racy content. Scan images for text content and extract that text, and detect faces. You can match images against custom lists, and take further action.
+Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content. Scan images for text content and extract that text, and detect faces. You can match images against custom lists, and take further action.
## Evaluating for adult and racy content
Example extract:
}
```
-## Review tool
-
-For more nuanced cases, use the Content Moderator [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) and its API to surface the moderation results and content in the review for your human moderators. They review the machine-assigned tags and confirm their final decisions.
-
-![Image review for human moderators](images/moderation-reviews-quickstart-dotnet.PNG)
## Next steps
-Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples. Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
+Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples.
cognitive-services Moderation Jobs Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/moderation-jobs-quickstart-dotnet.md
- Title: "Use moderation jobs using .NET - Content Moderator"-
-description: Use the Content Moderator .NET SDK to initiate end-to-end content moderation jobs for image or text content in Azure Content Moderator.
------- Previously updated : 10/24/2019--
-#The jobs how-to for .NET SDK
--
-# Define and use moderation jobs (.NET)
--
-A moderation job serves as a kind of wrapper for the functionality of content moderation, workflows and reviews. This guide provides information and code samples to help you get started using the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
- Start a moderation job to scan and create reviews for human moderators
- Get the status of the pending review
- Track and get the final status of the review
- Submit the review results to the callback URL
-## Prerequisites
- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-## Ensure your API key can call the review API for review creation
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](./review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-## Define a custom moderation workflow
-
-A moderation job scans your content using the APIs and uses a **workflow** to determine whether to create reviews or not.
-While the review tool contains a default workflow, let's [define a custom workflow](Review-Tool-User-Guide/Workflows.md) for this quickstart.
-
-You use the name of the workflow in your code that starts the moderation job.
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
- In the sample code, name the project **CreateReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages:
- Microsoft.Azure.CognitiveServices.ContentModerator
- Microsoft.Rest.ClientRuntime
- Newtonsoft.Json
-### Update the program's using statements
-
-Modify the program's using statements.
-
-```csharp
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-```
-
-### Create the Content Moderator client
-
-Add the following code to create a Content Moderator client for your subscription.
-
-> [!IMPORTANT]
-> Update the **AzureEndpoint** and **CMSubscriptionKey** fields with
-> the values of your endpoint URL and subscription key.
-
-```csharp
-/// <summary>
-/// Wraps the creation and configuration of a Content Moderator client.
-/// </summary>
-/// <remarks>This class library contains insecure code. If you adapt this
-/// code for use in production, use a secure method of storing and using
-/// your Content Moderator subscription key.</remarks>
-public static class Clients
-{
- /// <summary>
- /// The base URL fragment for Content Moderator calls.
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR API KEY";
-
- /// <summary>
- /// Returns a new Content Moderator client for your subscription.
- /// </summary>
- /// <returns>The new client.</returns>
- /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
- /// When you have finished using the client,
- /// you should dispose of it either directly or indirectly. </remarks>
- public static ContentModeratorClient NewClient()
- {
- // Create and initialize an instance of the Content Moderator API wrapper.
- ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
-
- client.Endpoint = AzureEndpoint;
- return client;
- }
-}
-```
-
-### Initialize application-specific settings
-
-Add the following constants and static fields to the **Program** class in Program.cs.
-
-> [!NOTE]
-> You set the TeamName constant to the name you used when you
-> created your Content Moderator subscription. You retrieve TeamName from the
-> Content Moderator web site.
-> Once you log in, select **Credentials** from the **Settings** (gear) menu.
->
-> Your team name is the value of the **Id** field in the **API** section.
-
-```csharp
-/// <summary>
-/// The moderation job will use this workflow that you defined earlier.
-/// See the quickstart article to learn how to setup custom workflows.
-/// </summary>
-private const string WorkflowName = "OCR";
-
-/// <summary>
-/// The name of the team to assign the job to.
-/// </summary>
-/// <remarks>This must be the team name you used to create your
-/// Content Moderator account. You can retrieve your team name from
-/// the Content Moderator web site. Your team name is the Id associated
-/// with your subscription.</remarks>
-private const string TeamName = "***";
-
-/// <summary>
-/// The URL of the image to create a review job for.
-/// </summary>
-private const string ImageUrl =
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png";
-
-/// <summary>
-/// The name of the log file to create.
-/// </summary>
-/// <remarks>Relative paths are relative to the execution directory.</remarks>
-private const string OutputFile = "OutputLog.txt";
-
-/// <summary>
-/// The number of seconds to delay after a review has finished before
-/// getting the review results from the server.
-/// </summary>
-private const int latencyDelay = 45;
-
-/// <summary>
-/// The callback endpoint for completed reviews.
-/// </summary>
-/// <remarks>Reviews show up for reviewers on your team.
-/// As reviewers complete reviews, results are sent to the
-/// callback endpoint using an HTTP POST request.</remarks>
-private const string CallbackEndpoint = "";
-```
-
-## Add code to auto-moderate, create a review, and get the job details
-
-> [!Note]
-> In practice, you set the callback URL **CallbackEndpoint** to the URL
-> that receives the results of the manual review (via an HTTP POST request).
-
-Start by adding the following code to the **Main** method.
-
-```csharp
-using (TextWriter writer = new StreamWriter(OutputFile, false))
-{
- using (var client = Clients.NewClient())
- {
- writer.WriteLine("Create review job for an image.");
- var content = new Content(ImageUrl);
-
- // The WorkflowName contains the name of the workflow defined in the online review tool.
- // See the quickstart article to learn more.
- var jobResult = client.Reviews.CreateJobWithHttpMessagesAsync(
- TeamName, "image", "contentID", WorkflowName, "application/json", content, CallbackEndpoint);
-
- // Record the job ID.
- var jobId = jobResult.Result.Body.JobIdProperty;
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobResult.Result.Body, Formatting.Indented));
-
- Thread.Sleep(2000);
- writer.WriteLine();
-
- writer.WriteLine("Get review job status.");
- var jobDetails = client.Reviews.GetJobDetailsWithHttpMessagesAsync(
- TeamName, jobId);
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobDetails.Result.Body, Formatting.Indented));
-
- Console.WriteLine();
- Console.WriteLine("Perform manual reviews on the Content Moderator site.");
- Console.WriteLine("Then, press any key to continue.");
- Console.ReadKey();
-
- Console.WriteLine();
- Console.WriteLine($"Waiting {latencyDelay} seconds for results to propagate.");
- Thread.Sleep(latencyDelay * 1000);
-
- writer.WriteLine("Get review details.");
- jobDetails = client.Reviews.GetJobDetailsWithHttpMessagesAsync(
- TeamName, jobId);
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobDetails.Result.Body, Formatting.Indented));
- }
- writer.Flush();
- writer.Close();
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS)
-> rate limit. If you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
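
If you need to stay under the limit, one option is to catch the exception and back off. This is a sketch, assuming the SDK's `APIErrorException` type and reusing the variables from the sample above; the one-second pause matches the free tier's rate limit:

```csharp
// One way to respect the rate limit -- the exception type and backoff
// interval here are assumptions, not part of the original sample.
try
{
    var jobResult = client.Reviews.CreateJobWithHttpMessagesAsync(
        TeamName, "image", "contentID", WorkflowName,
        "application/json", content, CallbackEndpoint).Result;
}
catch (AggregateException ae) when (ae.InnerException is APIErrorException apiError)
{
    Console.WriteLine($"Request failed: {apiError.Response.StatusCode}");
    Thread.Sleep(1000); // back off before retrying on the free tier
}
```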
-
-## Run the program and review the output
-
-You see the following sample output in the console:
-
-```console
-Perform manual reviews on the Content Moderator site.
-Then, press any key to continue.
-```
-
-Sign in to the Content Moderator Review tool to see the pending image review.
-
-Use the **Next** button to submit.
-
-![Image review for human moderators](images/ocr-sample-image.PNG)
-
-## See the sample output in the log file
-
-> [!NOTE]
-> In your output file, the strings **Teamname**, **ContentId**, **CallBackEndpoint**, and **WorkflowId**
-> reflect the values you used earlier.
-
-```json
-Create moderation job for an image.
-{
- "JobId": "2018014caceddebfe9446fab29056fd8d31ffe"
-}
-
-Get review details.
-{
- "Id": "2018014caceddebfe9446fab29056fd8d31ffe",
- "TeamName": "some team name",
- "Status": "InProgress",
- "WorkflowId": "OCR",
- "Type": "Image",
- "CallBackEndpoint": "",
- "ReviewId": "",
- "ResultMetaData": [],
- "JobExecutionReport": [
- {
- "Ts": "2018-01-07T00:38:26.7714671",
- "Msg": "Successfully got hasText response from Moderator"
- },
- {
- "Ts": "2018-01-07T00:38:26.4181346",
- "Msg": "Getting hasText from Moderator"
- },
- {
- "Ts": "2018-01-07T00:38:25.5122828",
- "Msg": "Starting Execution - Try 1"
- }
- ]
-}
-```
-
-## Your callback URL, if provided, receives this response
-
-You see a response like the following example:
-
-> [!NOTE]
-> In your callback response, the strings **ContentId** and **WorkflowId**
-> reflect the values you used earlier.
-
-```json
-{
- "JobId": "2018014caceddebfe9446fab29056fd8d31ffe",
- "ReviewId": "201801i28fc0f7cbf424447846e509af853ea54",
- "WorkFlowId": "OCR",
- "Status": "Complete",
- "ContentType": "Image",
- "CallBackType": "Job",
- "ContentId": "contentID",
- "Metadata": {
- "hastext": "True",
- "ocrtext": "IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n",
- "imagename": "contentID"
- }
-}
-```
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/overview.md
keywords: content moderator, azure content moderator, online moderator, content
[!INCLUDE [deprecation notice](includes/tool-deprecation.md)]
-Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service which scans text, image, and videos and applies content flags automatically, as well as the Review tool, an online moderator environment for a team of human reviewers.
+Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service which scans text, image, and videos and applies content flags automatically.
You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
The following are a few scenarios in which a software developer or team would re
## What it includes
-The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK. It also includes the Review tool, which allows human reviewers to aid the service and improve or fine-tune its moderation function.
+The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK.
## Moderation APIs
The following table describes the different types of moderation APIs.
|[**Custom image lists**](try-image-list-api.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.| |[**Video moderation**](video-moderation-api.md)| Scans videos for adult or racy content and returns time markers for said content.|
-## Review APIs
-
-The Review APIs let you integrate your moderation pipeline with human reviewers. Use the [Jobs](review-api.md#jobs), [Reviews](review-api.md#reviews), and [Workflow](review-api.md#workflows) operations to create and automate human-in-the-loop workflows with the [Review tool](#review-tool) (below).
-
-> [!NOTE]
-> The Workflow API is not yet available in the .NET SDK but can be used with the REST endpoint.
-
-![block diagram for Content Moderator review APIs](images/content-moderator-rev-api.png)
-
-## Review tool
-
-The Content Moderator service also includes the web-based [Review tool](Review-Tool-User-Guide/human-in-the-loop.md), which hosts the content reviews for human moderators to process. The human input doesn't train the service, but the combined work of the service and human review teams allows developers to strike the right balance between efficiency and accuracy. The Review tool also provides a user-friendly front end for several Content Moderator resources.
-
-![Content Moderator Review tool homepage](images/homepage.PNG)
- ## Data privacy and security As with all of the Cognitive Services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
cognitive-services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/quick-start.md
- Title: "Quickstart: Try Content Moderator on the web"-
-description: In this quickstart, you'll use the online Content Moderator Review tool to test out the basic functionality of Content Moderator without having to write any code.
------ Previously updated : 09/28/2021--
-keywords: content moderator, content moderation
--
-# Quickstart: Try Content Moderator on the web
--
-In this quickstart, you'll use the online Content Moderator Review tool to test out the basic functionality of Content Moderator without having to write any code. If you wish to integrate this service into your content moderation app more quickly, see the other quickstarts in the [Next steps](#next-steps) section.
-
-## Prerequisites
- A web browser
-## Set up the review tool
-The Content Moderator Review tool is a web-based tool that allows human reviewers to aid the cognitive service in making decisions. In this guide, you'll go through the short process of setting up the review tool so that you can see how the Content Moderator service works. Go to the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) site and sign up.
-
-![Content Moderator Home Page](images/homepage.PNG)
-
-## Create a review team
-
-Next, create a review team. In a working scenario, this team will be the group of people who manually review the service's moderation decisions. To create a team, you'll need to select a **Region**, and provide a **Team Name** and a **Team ID**. If you wish to invite colleagues to the team, you can do so by entering their email addresses here.
-
-> [!NOTE]
> **Team Name** is a friendly name for your review team. This is the name displayed in the Azure portal. The **Team ID** is what's used to identify your review team programmatically.
-
-> [!div class="mx-imgBorder"]
-> ![Invite team member](images/create-team.png)
-
-If you choose to encrypt data using a customer-managed key (CMK), you'll be prompted for the **Resource ID** for your Content Moderator resource in the E0 pricing tier. The resource you provide must be unique to this team.
-
-> [!div class="mx-imgBorder"]
-> ![Invite team member with CMK](images/create-team-cmk.png)
-
-## Upload sample content
-
-Now you're ready to upload sample content. Select **Try > Image**, **Try > Text**, or **Try > Video**.
-
-> [!div class="mx-imgBorder"]
-> ![Try Image or Text Moderation](images/tryimagesortext.png)
-
-Submit your content for moderation. You can use the following sample text content:
-
-```
-Is this a garbage email abcdef@abcd.com, phone: 4255550111, IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
-<offensive word> is the profanity here. Is this information PII? phone 4255550111
-```
-
-Internally, the review tool will call the moderation APIs to scan your content. Once the scanning is complete, you'll see a message informing you that there are results waiting for your review.
-
-> [!div class="mx-imgBorder"]
-> ![Moderate files](images/submitted.png)
-
-## Review moderation tags
-
-Review the applied moderation tags. You can see which tags were applied to your content and what the score was in each category. See the [Image](image-moderation-api.md), [Text](text-moderation-api.md), and [Video](video-moderation-api.md) moderation articles to learn more about what the different content tags indicate.
-
-<!-- ![Review results](images/reviewresults_text.png) -->
-
-In a project, you or your review team can change these tags or add more tags as needed. You'll submit these changes with the **Next** button. As your business application calls the Moderator APIs, the tagged content will queue up here, ready to be reviewed by the human review teams. You can quickly review large volumes of content using this approach.
-
-At this point, you've used the Content Moderator Review tool to see examples of what the Content Moderator service can do. Next, you can either learn more about the review tool and how to integrate it into a software project using the Review APIs, or you can skip to the [Next steps](#next-steps) section to learn how to use the Moderation APIs themselves in your app.
-
-## Learn more about the review tool
-
-To learn more about how to use the Content Moderator Review tool, take a look at the [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) guide, and see the Review tool APIs to learn how to fine-tune the human review experience:
- The [Job API](try-review-api-job.md) scans your content by using the moderation APIs and generates reviews in the review tool.
- The [Review API](try-review-api-review.md) directly creates image, text, or video reviews for human moderators without first scanning the content.
- The [Workflow API](try-review-api-workflow.md) creates, updates, and gets details about the custom workflows that your team creates.
-Or, continue with the next steps to get started using the Moderation APIs in your code.
-
-## Next steps
-
-Learn how to use the Moderation APIs themselves in your app.
-- Implement image moderation. Use the [API console](try-image-api.md) or follow a [quickstart](client-libraries.md) to scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information.
-- Implement text moderation. Use the [API console](try-text-api.md) or follow a [quickstart](client-libraries.md) to scan text content for potential profanity, personal data, and other unwanted text.
-- Implement video moderation. Follow the [Video moderation how-to guide for C#](video-moderation-api.md) to scan videos and detect potential adult and racy content.
cognitive-services Review Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/review-api.md
- Title: Reviews, Workflows, and Jobs concepts - Content Moderator-
-description: In this article, you will learn about the core concepts of the Review tool; reviews, workflows, and jobs.
------- Previously updated : 03/14/2019-
-#Customer intent: broad conceptual overview of key concepts
--
-# Content moderation reviews, workflows, and jobs
--
-Content Moderator combines machine-assisted moderation with human-in-the-loop capabilities to create an optimal moderation process for real-world scenarios. It does this through the cloud-based [Review tool](https://contentmoderator.cognitive.microsoft.com). In this guide, you'll learn about the core concepts of the Review tool: reviews, workflows, and jobs.
-
-## Reviews
-
-In a review, content is uploaded to the Review tool. You can view it by selecting its content type under the **Review** tab on the dashboard. From the review screen, you can alter the applied tags and apply your own custom tags as appropriate. When you submit a review, the results are sent to a specified callback endpoint, and the content is removed from the site.
-
-> [!div class="mx-imgBorder"]
-> ![The Review drop-down menu is highlighted. It shows these content types: Image, Text, and Video.](./Review-Tool-user-Guide/images/review-tab.png)
-
-### Manage reviews
-
-From the dashboard, navigate to **Admin** -> **Manage Reviews** to view the admin screen. Here, you can see a list of all reviews (pending and completed).
-
-The three-dot **Actions** button on each review lets you go to the review screen or inspect the history of that review.
-
-> [!div class="mx-imgBorder"]
-> ![Review tool website, on the Review screen](./Review-Tool-user-Guide/images/manage-reviews.png)
-
-Use the **Search** toolbar to sort the reviews by a variety of categories such as review state, tags, content type, subteams, assigned users, and created/modified date.
-
-> [!div class="mx-imgBorder"]
-> ![The Search toolbar is shown. It has various combo boxes for entering search criteria, such as Review State and Tags.](./Review-Tool-user-Guide/images/review-search.png)
-
-See the [Review tool guide](./review-tool-user-guide/review-moderated-images.md) to get started creating reviews, or see the [API console guide](./try-review-api-review.md) to learn how to do so programmatically.
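
As a taste of the programmatic route, the following is a minimal C# sketch of creating an image review with the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/). The endpoint, key, team name, content ID, and callback URL are illustrative placeholders, not values from this article:

```csharp
// A minimal sketch of creating an image review with the .NET SDK.
// The endpoint, key, team name, content ID, and callback URL below are
// illustrative placeholders.
using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.ContentModerator;
using Microsoft.Azure.CognitiveServices.ContentModerator.Models;

class CreateReviewSketch
{
    static void Main()
    {
        var client = new ContentModeratorClient(
            new ApiKeyServiceClientCredentials("YOUR CONTENT MODERATOR KEY"))
        {
            Endpoint = "YOUR ENDPOINT URL"
        };

        var body = new List<CreateReviewBodyItem>
        {
            new CreateReviewBodyItem
            {
                Type = "Image",
                Content = "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg",
                ContentId = "demo-content-1",                     // your own identifier
                CallbackEndpoint = "https://example.com/callback" // hypothetical endpoint
            }
        };

        // Each submitted item yields one review ID.
        IList<string> reviewIds = client.Reviews.CreateReviews("application/json", "YOUR TEAM NAME", body);
        Console.WriteLine("Created review {0}", reviewIds[0]);
    }
}
```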
-
-## Workflows
-
-A workflow is a cloud-based customized filter for content. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. With the Content Moderator connector, a workflow can automatically apply moderation tags and create reviews with submitted content.
-
-### View workflows
-
-To view your existing workflows, go to the [Review tool](https://contentmoderator.cognitive.microsoft.com/) and select **Admin** > **Workflows**.
-
-> [!div class="mx-imgBorder"]
-> ![Default workflow](images/default-workflow-list.png)
-
-Workflows are defined as JSON strings, which makes them accessible programmatically. If you select the **Edit** option for your workflow and then select the **JSON** tab, you'll see a JSON expression like the following:
-
-```json
-{
- "Type": "Logic",
- "If": {
- "ConnectorName": "moderator",
- "OutputName": "isAdult",
- "Operator": "eq",
- "Value": "true",
- "Type": "Condition"
- },
- "Then": {
- "Perform": [
- {
- "Name": "createreview",
- "CallbackEndpoint": null,
- "Tags": []
- }
- ],
- "Type": "Actions"
- }
-}
-```
-
-See the [Review tool guide](./review-tool-user-guide/workflows.md) to get started creating and using workflows, or see the [API console guide](./try-review-api-workflow.md) to learn how to do so programmatically.
-
-## Jobs
-
-A moderation job serves as a kind of wrapper for the functionality of content moderation, workflows, and reviews. The job scans your content using the Content Moderator image moderation API or text moderation API and then checks it against the designated workflow. Based on the workflow results, it may or may not create a review for the content in the [Review tool](./review-tool-user-guide/human-in-the-loop.md). While both reviews and workflows can be created and configured with their respective APIs, the job API allows you to obtain a detailed report of the entire process (which can be sent to a specified callback endpoint).
-
-See the [API console guide](./try-review-api-job.md) to get started using jobs.
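
Jobs can also be started from code. The following is a minimal sketch using the .NET SDK's `Reviews.CreateJob` operation; it assumes a configured `ContentModeratorClient`, and the team name, content ID, workflow name, and callback endpoint are placeholders:

```csharp
// A minimal sketch of starting a moderation job with the .NET SDK.
// Assumes a configured ContentModeratorClient; the team name, content ID,
// workflow name, and callback endpoint are illustrative placeholders.
using System;
using Microsoft.Azure.CognitiveServices.ContentModerator;
using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
using Newtonsoft.Json;

static class CreateJobSketch
{
    public static void Run(ContentModeratorClient client)
    {
        var job = client.Reviews.CreateJob(
            "YOUR TEAM NAME",
            "Image",              // ContentType: Image, Text, or Video
            "demo-content-1",     // ContentId: echoed back through the callback
            "default",            // WorkflowName: "default" or a custom workflow
            "application/json",
            new Content("https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg"),
            "https://example.com/callback"); // hypothetical callback endpoint

        // The response carries the ID of the new job.
        Console.WriteLine(JsonConvert.SerializeObject(job, Formatting.Indented));
    }
}
```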
-
-## Next steps
-
-* Test drive the [Job API console](try-review-api-job.md), and use the REST API code samples. If you're familiar with Visual Studio and C#, also check out the [Jobs .NET quickstart](moderation-jobs-quickstart-dotnet.md).
-* For reviews, get started with the [Review API console](try-review-api-review.md), and use the REST API code samples. Then see the reviews section of the [.NET quickstart](./client-libraries.md?pivots=programming-language-csharp).
-* For video reviews, use the [Video review quickstart](video-reviews-quickstart-dotnet.md), and learn how to [add transcripts to the video review](video-transcript-reviews-quickstart-dotnet.md).
cognitive-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/samples-dotnet.md
ms.devlang: csharp
The following list includes links to the code samples built using the Azure Content Moderator SDK for .NET.
-## Moderation
- **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp).
- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp).
The following list includes links to the code samples built using the Azure Cont
- **Video moderation**: [Scan a video for adult and racy content and get results](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoModeration/Program.cs). See [quickstart](video-moderation-api.md).
-## Review
-
-- **Image jobs**: [Start a moderation job that scans and creates reviews](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageJobs/Program.cs). See [quickstart](moderation-jobs-quickstart-dotnet.md).
-- **Image reviews**: [Create reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageReviews/Program.cs). See [quickstart](./client-libraries.md?pivots=programming-language-csharp).
-- **Video reviews**: [Create video reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoReviews/Program.cs). See [quickstart](video-reviews-quickstart-dotnet.md).
-- **Video transcript reviews**: [Create video transcript reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoTranscriptReviews/Program.cs). See [quickstart](video-reviews-quickstart-dotnet.md).
-See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
+See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
cognitive-services Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/samples-rest.md
The following list includes links to code samples built using the Azure Content
- [Image moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageModeration)
- [Text moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/TextModeration)
- [Video moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/VideoModeration)
-- [Image reviews](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageReviews)
-- [Image jobs](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageJob)

For walkthroughs of these samples, check out the [on-demand webinar](https://info.microsoft.com/cognitive-services-content-moderator-ondemand.html).
cognitive-services Term Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/term-lists-quickstart-dotnet.md
# Check text against a custom term list in C#
-The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization. For example, you might want to tag competitor names for further review.
+The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization.
You can use the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to create custom lists of terms to use with the Text Moderation API.
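
As a rough sketch of what that looks like (assuming a configured `ContentModeratorClient`; the list name, description, and terms are illustrative):

```csharp
// A minimal sketch of creating and populating a custom term list with the
// .NET SDK. Assumes a configured ContentModeratorClient; the list name,
// description, and terms are placeholders.
using System;
using Microsoft.Azure.CognitiveServices.ContentModerator;
using Microsoft.Azure.CognitiveServices.ContentModerator.Models;

static class TermListSketch
{
    public static void Run(ContentModeratorClient client)
    {
        // Create the list itself.
        Body body = new Body("CompetitorTerms", "Names to flag for review");
        TermList list = client.ListManagementTermLists.Create("application/json", body);
        string listId = list.Id.ToString();

        // Add English terms to the new list.
        client.ListManagementTerm.AddTerm(listId, "contoso", "eng");
        client.ListManagementTerm.AddTerm(listId, "fabrikam", "eng");

        // Index the list so that screen-text calls can match against it.
        client.ListManagementTermLists.RefreshIndexMethod(listId, "eng");

        Console.WriteLine("Created term list {0}", listId);
    }
}
```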
cognitive-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/text-moderation-api.md
# Learn text moderation concepts
-Use Content Moderator's text moderation models to analyze text content.
-
-You can block, approve or review the content based on your policies and thresholds (see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews). Use the text moderation models to augment human moderation of environments where partners, employees and consumers generate text content. These include chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
+Use Content Moderator's text moderation models to analyze text content, such as chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
The service response includes the following information:
If the API detects any profane terms in any of the [supported languages](./langu
## Classification
-Content Moderator's machine-assisted **text classification feature** supports **English only**, and helps detect potentially undesired content. The flagged content may be assessed as inappropriate depending on context. It conveys the likelihood of each category and may recommend a human review. The feature uses a trained model to identify possible abusive, derogatory or discriminatory language. This includes slang, abbreviated words, offensive, and intentionally misspelled words for review.
+Content Moderator's machine-assisted **text classification feature** supports **English only**, and helps detect potentially undesired content. The flagged content may be assessed as inappropriate depending on context. It conveys the likelihood of each category. The feature uses a trained model to identify possible abusive, derogatory, or discriminatory language. This includes slang, abbreviated words, and offensive or intentionally misspelled words.
The following JSON extract shows an example output:
The Content Moderator provides a [Term List API](https://westus.dev.cognitive.mi
## Next steps
-Test out the APIs with the [Text moderation API console](try-text-api.md). Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
+Test out the APIs with the [Text moderation API console](try-text-api.md).
cognitive-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-image-api.md
Title: Moderate images with the API Console - Content Moderator
-description: Use the Image Moderation API in Azure Content Moderator to initiate scan-and-review moderation workflows for image content.
+description: Use the Image Moderation API in Azure Content Moderator to scan image content.
# Moderate images from the API console
-Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to initiate scan-and-review moderation workflows for image content. The moderation job scans your content for profanity, and compares it against custom and shared blocklists.
+Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to scan image content. The moderation job scans your content for profanity, and compares it against custom and shared blocklists.
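
If you'd rather call the API from code than from the console, a minimal .NET SDK sketch looks roughly like the following; it assumes a configured `ContentModeratorClient` and uses a sample image URL:

```csharp
// A minimal sketch of evaluating an image with the .NET SDK, as an
// alternative to the API console. Assumes a configured ContentModeratorClient.
using System;
using Microsoft.Azure.CognitiveServices.ContentModerator;
using Microsoft.Azure.CognitiveServices.ContentModerator.Models;

static class ImageModerationSketch
{
    public static void Run(ContentModeratorClient client)
    {
        string imageUrl = "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg";

        // Evaluate the image for adult and racy content.
        Evaluate result = client.ImageModeration.EvaluateUrlInput(
            "application/json", new BodyModel("URL", imageUrl));

        Console.WriteLine("Adult score: {0}, racy score: {1}",
            result.AdultClassificationScore, result.RacyClassificationScore);
    }
}
```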
## Use the API console

Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
cognitive-services Try Review Api Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-job.md
- Title: Use moderation jobs with the REST API console - Content Moderator-
-description: Use the Review API's job operations to initiate end-to-end content moderation jobs for image or text content in Azure Content Moderator.
------- Previously updated : 10/24/2019-
-#The Jobs how-to for REST/console
--
-# Define and use moderation jobs (API console)
--
-A moderation job serves as a kind of wrapper for the functionality of content moderation, workflows, and reviews. This guide shows you how to use the job REST APIs to initiate and check content moderation jobs. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-- (Optional) [Define a custom workflow](./Review-Tool-User-Guide/Workflows.md) to use with your job; you can also use the default workflow.
-
-## Create a job
-
-To create a moderation job, go to the [Job - Create](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5) API reference page and select the button for your subscription region. You can find your region in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/). This starts the API console, where you can easily construct and run REST API calls.
-
-![Job - Create page region selection](images/test-drive-job-1.png)
-
-### Enter REST call parameters
-
-Enter the following values to construct the REST call:
-- **teamName**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **ContentType**: This can be "Image", "Text", or "Video".
-- **ContentId**: A custom identifier string. This string is passed to the API and returned through the callback. It is useful for associating internal identifiers or metadata with the results of a moderation job.
-- **Workflowname**: The name of the workflow you previously created (or "default" for the default workflow).
-- **CallbackEndpoint**: (Optional) The URL to receive callback information when the review is completed.
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this key on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-
-### Fill in the request body
-
-The body of your REST call contains one field, **ContentValue**. Paste in the raw text content if you are moderating text, or enter an image or video URL if you're moderating an image or video. You can use the following sample image URL: [https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg](https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg)
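
The same call can be made outside the console. Below is a rough C# sketch of the equivalent REST request, assuming the standard Content Moderator review endpoint shape; the region, team ID, and key are placeholders:

```csharp
// A rough sketch of the Job - Create REST call with HttpClient, assuming the
// standard Content Moderator review endpoint shape. The region, team ID, and
// key below are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateJobRestSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR CONTENT MODERATOR KEY");

        string url = "https://westus2.api.cognitive.microsoft.com/contentmoderator/review/v1.0/"
                   + "teams/YOUR-TEAM-ID/jobs"
                   + "?ContentType=Image&ContentId=demo-1&WorkflowName=default";

        // The body carries the single ContentValue field described above.
        var body = new StringContent(
            "{\"ContentValue\": \"https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg\"}",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PostAsync(url, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // expect a JobId on success
    }
}
```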
-
-![Job - Create console query parameters, headers, and Request body box](images/job-api-console-inputs.PNG)
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays an ID for the job. Copy this ID to use in the following steps.
-
-![Review - Create console Response content box displays the review ID](images/test-drive-job-3.PNG)
-
-## Get job status
-
-To get the status and details of a running or completed job, go to the [Job - Get](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c3) API reference page and select the button for your region (the region in which your key is administered).
-
-![Job - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. For this step, **JobId** is the unique ID string you received when you created the job. Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the job in JSON format, like the following:
-
-```json
-{
- "Id":"2018014caceddebfe9446fab29056fd8d31ffe",
- "TeamName":"some team name",
- "Status":"Complete",
- "WorkflowId":"OCR",
- "Type":"Image",
- "CallBackEndpoint":"",
- "ReviewId":"201801i28fc0f7cbf424447846e509af853ea54",
- "ResultMetaData":[
- {
- "Key":"hasText",
- "Value":"True"
- },
- {
- "Key":"ocrText",
- "Value":"IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n"
- }
- ],
- "JobExecutionReport":[
- {
- "Ts":"2018-01-07T00:38:29.3238715",
- "Msg":"Posted results to the Callbackendpoint: https://requestb.in/vxke1mvx"
- },
- {
- "Ts":"2018-01-07T00:38:29.2928416",
- "Msg":"Job marked completed and job content has been removed"
- },
- {
- "Ts":"2018-01-07T00:38:29.0856472",
- "Msg":"Execution Complete"
- },
- {
- "Ts":"2018-01-07T00:38:26.7714671",
- "Msg":"Successfully got hasText response from Moderator"
- },
- {
- "Ts":"2018-01-07T00:38:26.4181346",
- "Msg":"Getting hasText from Moderator"
- },
- {
- "Ts":"2018-01-07T00:38:25.5122828",
- "Msg":"Starting Execution - Try 1"
- }
- ]
-}
-```
-
-![Job - Get REST call response](images/test-drive-job-5.png)
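
To poll the same status from code instead of the console, a GET with the key header is enough; here is a rough sketch under the same endpoint assumptions as the create call above (the job ID is the value returned when you created the job):

```csharp
// A rough sketch of the Job - Get REST call with HttpClient, under the same
// endpoint assumptions as the create sketch above.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class GetJobRestSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR CONTENT MODERATOR KEY");

        string url = "https://westus2.api.cognitive.microsoft.com/contentmoderator/review/v1.0/"
                   + "teams/YOUR-TEAM-ID/jobs/YOUR-JOB-ID";

        string json = await http.GetStringAsync(url);
        Console.WriteLine(json); // includes Status, ReviewId, and ResultMetaData as shown above
    }
}
```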
-
-### Examine the new review(s)
-
-If your content job resulted in the creation of a review, you can view it in the [Review tool](https://contentmoderator.cognitive.microsoft.com). Select **Review** > **Image**/**Text**/**Video** (depending on what content you used). The content should appear, ready for human review. After a human moderator reviews the auto-assigned tags and prediction data and submits a final moderation decision, the jobs API submits all of this information to the designated callback endpoint.
-
-## Next steps
-
-In this guide, you learned how to create and query content moderation jobs using the REST API. Next, integrate jobs into an end-to-end moderation scenario, such as the [E-commerce moderation](./ecommerce-retail-catalog-moderation.md) tutorial.
cognitive-services Try Review Api Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-review.md
- Title: Create moderation reviews with REST API console - Content Moderator-
-description: Use the Azure Content Moderator Review APIs to create image or text reviews for human moderation.
------- Previously updated : 03/18/2019-
-# reviews how-to for REST API
--
-# Create human reviews (API console)
--
-[Reviews](./review-api.md#reviews) store and display content for human moderators to assess. When a user completes a review, the results are sent to a specified callback endpoint. In this guide, you'll learn how to set up reviews using the review REST APIs through the API console. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-
-## Create a review
-
-To create a review, go to the **[Review - Create](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4)** API reference page and select the button for your key region (you can find this in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/)). This starts the API console, where you can easily construct and run REST API calls.
-
-![Review - Get region selection](images/test-drive-region.png)
-
-### Enter REST call parameters
-
-Enter values for **teamName**, and **Ocp-Apim-Subscription-Key**:
-- **teamName**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-
-### Enter a review definition
-
-Edit the **Request body** box to enter the JSON request with the following fields:
-- **Metadata**: Custom key-value pairs to be returned to your callback endpoint. If the key is a short code that is defined in the [Review tool](https://contentmoderator.cognitive.microsoft.com), it appears as a tag.
-- **Content**: In the case of Image and Video content, this is a URL string pointing to the content. For text content, this is the actual text string.
-- **ContentId**: A custom identifier string. This string is passed to the API and returned through the callback. It is useful for associating internal identifiers or metadata with the results of a moderation job.
-- **CallbackEndpoint**: (Optional) The URL to receive callback information when the review is completed.
-
-The default request body shows examples of the different types of reviews you can create:
-
-```json
-[Image]
-[
- {
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- }
- ],
- "Type": "Image",
- "Content": "<Content Url>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>"
- }
-]
-[Text]
-[
- {
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- }
- ],
- "Type": "Text",
- "Content": "<Your Text Content>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>"
- }
-]
-[Video]
-[
- {
- "VideoFrames":[
- {
- "Id": "<Frame Id>",
- "Timestamp": "<Frame Timestamp",
- "FrameImage":"<Frame Image URL",
- "Metadata": [
- {
- "Key": "<Key>",
- "Value": "<Value"
- }
- ],
- "ReviewerResultTags": [
- ]
- ],
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- },
- //For encrypted Videos
- {
- "Key": "protectedType",
- "Value": "AES or FairPlay or Widevine or Playready"
- },
- {
- "Key": "authenticationToken",
- "Value": "your viewtoken(In case of Video Indexer AES encryption type, this value is viewtoken from breakdown json)"
- },
- //For FairPlay encrypted type video include certificateUrl as well
- {
- "Key": "certificateUrl",
- "Value": "your certificate url"
- }
- ],
- "Type": "Video",
- "Content": "<Stream Url>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>",
- [Optional]
- "Timescale": "<Timescale of the video>
- }
-]
-```
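
Outside the console, the same request is a POST of this JSON array. Below is a rough C# sketch under the usual review endpoint assumptions; the region, team ID, and key are placeholders:

```csharp
// A rough sketch of the Review - Create REST call with HttpClient, assuming
// the standard Content Moderator review endpoint shape. Region, team ID, and
// key are placeholders; the body is a JSON array like the examples above.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateReviewRestSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR CONTENT MODERATOR KEY");

        string url = "https://westus2.api.cognitive.microsoft.com/contentmoderator/review/v1.0/"
                   + "teams/YOUR-TEAM-ID/reviews";

        string bodyJson = @"[{
            ""Type"": ""Text"",
            ""Content"": ""Sample text to review."",
            ""ContentId"": ""demo-1"",
            ""Metadata"": [{ ""Key"": ""sc"", ""Value"": ""true"" }]
        }]";

        var response = await http.PostAsync(url, new StringContent(bodyJson, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // expect an array with one review ID
    }
}
```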
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays an ID for the review. Copy this ID to use in the following steps.
-
-![Review - Create console Response content box displays the review ID](images/test-drive-review-2.PNG)
-
-### Examine the new review
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com), select **Review** > **Image**/**Text**/**Video** (depending on what content you used). The content that you uploaded should appear, ready for human review.
-
-![Review tool image of a soccer ball](images/test-drive-review-5.PNG)
-
-## Get review details
-
-To retrieve details about an existing review, go to the [Review - Get](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2) API reference page and select the button for your region (the region in which your key is administered).
-
-![Workflow - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. For this step, **reviewId** is the unique ID string you received when you created the review.
-
-![Review - Create console Get results](images/test-drive-review-3.PNG)
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the review details in JSON format, like the following:
-
-```json
-{
- "reviewId":"201712i46950138c61a4740b118a43cac33f434",
- "subTeam":"public",
- "status":"Complete",
- "reviewerResultTags":[
- {
- "key":"a",
- "value":"False"
- },
- {
- "key":"r",
- "value":"True"
- },
- {
- "key":"sc",
- "value":"True"
- }
- ],
- "createdBy":"<teamname>",
- "metadata":[
- {
- "key":"sc",
- "value":"true"
- }
- ],
- "type":"Image",
- "content":"https://reviewcontentprod.blob.core.windows.net/<teamname>/IMG_201712i46950138c61a4740b118a43cac33f434",
- "contentId":"0",
- "callbackEndpoint":"<callbackUrl>"
-}
-```
-
-Take note of the following fields in the response:
-- **status**
-- **reviewerResultTags**: This appears if any tags have been manually added by the human review team (shown in the **createdBy** field).
-- **metadata**: This shows the tags that were initially added in the review, before the human review team made changes.
-
-## Next steps
-
-In this guide, you learned how to create content moderation reviews using the REST API. Next, integrate reviews into an end-to-end moderation scenario, such as the [E-commerce moderation](./ecommerce-retail-catalog-moderation.md) tutorial.
cognitive-services Try Review Api Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-workflow.md
- Title: Define moderation workflows with the REST API console - Content Moderator-
-description: You can use the Azure Content Moderator Review APIs to define custom workflows and thresholds based on your content policies.
------- Previously updated : 03/14/2019-
-#Customer intent: use workflows from the REST API
--
-# Define and use moderation workflows (API console)
--
-Workflows are cloud-based customized filters that you can use to handle content more efficiently. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. This guide shows you how to use the workflow REST APIs, through the API console, to create and use workflows. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-
-## Create a workflow
-
-To create or update a workflow, go to the **[Workflow - Create Or Update](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59)** API reference page and select the button for your key region. You can find your region in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/). This starts the API console, where you can easily construct and run REST API calls.
-
-![Workflow - Create Or Update page region selection](images/test-drive-region.png)
-
-### Enter REST call parameters
-
-Enter values for **team**, **workflowname**, and **Ocp-Apim-Subscription-Key**:
-- **team**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **workflowname**: The name of a new workflow to add (or an existing name, if you want to update an existing workflow).
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this key on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-
-![Workflow - Create Or Update console query parameters and headers](images/workflow-console-parameters.PNG)
-
-### Enter a workflow definition
-
-1. Edit the **Request body** box to enter the JSON request with details for **Description** and **Type** (either `Image` or `Text`).
-2. For **Expression**, copy the default workflow JSON expression. Your final JSON string should look like this:
-
-```json
-{
- "Description":"<A description for the Workflow>",
- "Type":"Text",
- "Expression":{
- "Type":"Logic",
- "If":{
- "ConnectorName":"moderator",
- "OutputName":"isAdult",
- "Operator":"eq",
- "Value":"true",
- "Type":"Condition"
- },
- "Then":{
- "Perform":[
- {
- "Name":"createreview",
- "CallbackEndpoint":null,
- "Tags":[
-
- ]
- }
- ],
- "Type":"Actions"
- }
- }
-}
-```
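
The console issues this request as a PUT. A rough C# equivalent under the usual review endpoint assumptions follows; the region, team ID, workflow name, and key are placeholders, and the body is assumed to be the JSON shown above saved to a local file:

```csharp
// A rough sketch of the Workflow - Create Or Update REST call with HttpClient,
// assuming the standard Content Moderator review endpoint shape. Region,
// team ID, workflow name, and key are placeholders.
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class UpsertWorkflowSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR CONTENT MODERATOR KEY");

        string url = "https://westus2.api.cognitive.microsoft.com/contentmoderator/review/v1.0/"
                   + "teams/YOUR-TEAM-ID/workflows/MyTextWorkflow";

        // Load the workflow definition shown above from a local file.
        string bodyJson = File.ReadAllText("workflow.json");

        var response = await http.PutAsync(url, new StringContent(bodyJson, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode); // expect 200 OK with a body of "true"
    }
}
```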
-
-> [!NOTE]
-> You can define simple, complex, and even nested expressions for your workflows using this API. The [Workflow - Create Or Update](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59) documentation has examples of more complex logic.
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays `true`.
-
-### Examine the new workflow
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com/), select **Settings** > **Workflows**. Your new workflow should appear in the list.
-
-![Review tool list of workflows](images/workflow-console-new-workflow.PNG)
-
-Select the **Edit** option for your workflow and go to the **Designer** tab. Here, you can see an intuitive representation of the JSON logic.
-
-![Designer tab for a selected workflow](images/workflow-console-new-workflow-designer.PNG)
-
-## Get workflow details
-
-To retrieve details about an existing workflow, go to the **[Workflow - Get](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b44b3f9b0711b43c4c58)** API reference page and select the button for your region (the region in which your key is administered).
-
-![Workflow - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. Make sure that this time, **workflowname** is the name of an existing workflow.
-
-![Get query parameters and headers](images/workflow-get-default.PNG)
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the workflow in JSON format, like the following example:
-
-```json
-{
- "Name":"default",
- "Description":"Default",
- "Type":"Image",
- "Expression":{
- "If":{
- "ConnectorName":"moderator",
- "OutputName":"isadult",
- "Operator":"eq",
- "Value":"true",
- "AlternateInput":null,
- "Type":"Condition"
- },
- "Then":{
- "Perform":[
- {
- "Name":"createreview",
- "Subteam":null,
- "CallbackEndpoint":null,
- "Tags":[
-
- ]
- }
- ],
- "Type":"Actions"
- },
- "Else":null,
- "Type":"Logic"
- }
-}
-```
-
-## Next steps
-
-- Learn how to use workflows with [content moderation jobs](try-review-api-job.md).
cognitive-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-moderation-api.md
After the Content Moderation job is completed, analyze the JSON response. It con
## Next steps
-Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) from your moderation output.
- [Download the Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
cognitive-services Video Moderation Human Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-moderation-human-review.md
- Title: Video moderation with the Review tool - Content Moderator-
-description: Use machine-assisted video moderation and the Review tool to moderate inappropriate content
------- Previously updated : 07/20/2020---
-# Video moderation with the Review tool
--
-Use Content Moderator's machine-assisted [video moderation](video-moderation-api.md) and [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) to moderate videos and transcripts for adult (explicit) and racy (suggestive) content to get the best results for your business.
-
-## View videos under review
-
-On the dashboard, select any of the review queues within the video content type. This will start a review and open the video content moderation page.
-
-> [!div class="mx-imgBorder"]
-> ![On Content Moderator (Preview), the slider is highlighted and set to 4 reviews. The Blur All and Black and white switches are highlighted, and both are set.](./Review-Tool-User-Guide/images/video-moderation-detailed.png)
-
-### Review count
-
-Use the slider in the upper right to set the number of reviews you'd like to display on the page.
-
-### View type
-
-You can view the different content entries as tiles or in a detailed view. The **Detail** view will allow you to see key frames and other information about the selected video.
-
-> [!NOTE]
-> Instead of outputting frames at regular intervals, the video moderation service identifies and outputs only potentially complete (good) frames. This feature allows efficient frame generation for frame-level adult and racy analysis.
-
-The **Tiled** view will show each video as a single tile. Select the expand button above a video frame to enlarge that video and hide the others.
-
-### Content-obscuring effects
-
-Use the **Blur all** and **Black and white** toggles to set these content-obscuring effects. They're turned on by default. In the **Tiled** view, you can toggle the effects individually for each video.
-
-## Check video details
-
-In the **Detail** view, the right pane will show several tabs that give you details about the video.
-
-* Select the **Notes** tab to add custom notes to videos.
-* Select the **Transcript** tab to see the video transcript&mdash;the service automatically extracts a transcript of any speech in the video. When you select a section of text, the video player will jump to that part of the video.
-* Select the **Meta-data** tab to view video file metadata.
-* Select the **History** tab to see the history of the review, such as when it was created and how it was modified.
-
-> [!div class="mx-imgBorder"]
-> ![The right pane is highlighted, and the Notes tab is selected. There is a test area labelled Add a note.](./Review-Tool-User-Guide/images/video-moderation-video-details.png)
-
-## Apply moderation tags
-
-The main task of a video review is to apply or remove moderation tags on videos or parts of videos.
-
-### Bulk tagging
-
-The **Bulk Tags** toolbar lets you add tags to multiple selected videos at once. Select one or more videos, choose the tags you'd like to apply, and then select **Submit**.
-
-> [!div class="mx-imgBorder"]
-> ![The + button is highlighted in the Bulk Tags pane.](./Review-Tool-User-Guide/images/video-moderation-bulk-tags.png)
--
-### Key frame tagging
-
-You can also add moderation tags to specific key frames. Select the frames from the key frame tile pane, and then select **Keyframe tags +** to apply the tags you want.
-
-> [!NOTE]
-> If the service couldn't extract key frames, the key frame tile pane will show **No frames available** and the option to select key frames will be grayed out. In this case, you can only apply tags to the video as a whole (using the **Video tags +** button).
-
-> [!div class="mx-imgBorder"]
-> ![The tile pane, video player, Keyframe tags pane, and Video tags panes are all shown. The Keyframe tags + and Video tags + buttons are highlighted.](./Review-Tool-User-Guide/images/video-moderation-tagging-options.png)
-
-## Put a review on hold
-
-The **Hold** button at the bottom of the video pane lets you put a review on hold so you can retrieve it and complete it later. You might do this when a review requires consultation with a team member or manager who is currently unavailable.
-
-You can view the videos on hold by clicking the **Hold** button at the top of the screen. The Hold pane appears on the right. From here, you can select multiple reviews on hold and either release them back into the queue, or set their expiration time. After the preconfigured amount of time, reviews on hold are released back to the queue. Select **Save** to start counting down from the currently selected expiration time.
-
-> [!div class="mx-imgBorder"]
-> ![On the video pane, the Hold button is highlighted. At the bottom of the pane, a Hold Time combo box is highlighted, along with Release and Save buttons.](./Review-Tool-User-Guide/images/video-moderation-hold.png)
-
-## Submit a review
-
-After you've applied your tags, select the **Submit** button at the bottom of the video pane. If you've tagged multiple videos, you can submit them under a single review or as separate reviews.
-
-## Limbo state
-
-After you've submitted a review, the video is moved to the **Limbo** state, which you can view by selecting the **Limbo** button at the top of the screen. Videos remain in the Limbo state for a preconfigured amount of time (which you can change in the menu at the bottom), or until they're reviewed again or manually submitted.
-
-Once the videos expire from limbo, their reviews are marked as complete.
-
-## Next steps
-
-- Get started with the [video moderation quickstart](video-moderation-api.md).
-- Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) for your human reviewers from your moderated output.
-- Add [video transcript reviews](video-transcript-reviews-quickstart-dotnet.md) to your video reviews.
cognitive-services Video Reviews Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md
- Title: Create video reviews using .NET - Content Moderator-
-description: This article provides information and code samples to help you quickly get started using the Content Moderator SDK with C# to create video reviews.
------- Previously updated : 10/24/2019-----
-# Create video reviews using .NET
--
-This article provides information and code samples to help you quickly get started using the [Content Moderator SDK with C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-
-- Create a video review for human moderators
-- Add frames to a review
-- Get the frames for the review
-- Get the status and details of the review
-- Publish the review
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-- This article assumes that you have [moderated the video (see quickstart)](video-moderation-api.md) and have the response data. You need it for creating frame-based reviews for human moderators.
-
-## Ensure your API key can call the review API for review creation
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-### Prepare your video and the video frames for review
-
-The video and sample video frames to review must be published online because you need their URLs.
-
-> [!NOTE]
-> The program uses manually saved screenshots from the video with random adult/racy scores to illustrate the
-> use of the review API. In a real-world situation, you use the [video moderation output](video-moderation-api.md#run-the-program-and-review-the-output) to create images and assign scores.
-
-For the video, you need a streaming endpoint so that the review tool plays the video in the player view.
-
-![Video demo thumbnail](images/ams-video-demo-view.PNG)
-
-- Copy the **URL** on this [Azure Media Services demo](https://aka.ms/azuremediaplayer?url=https%3A%2F%2Famssamples.streaming.mediaservices.windows.net%2F91492735-c523-432b-ba01-faba6c2206a2%2FAzureMediaServicesPromo.ism%2Fmanifest) page for the manifest URL.
-
-For the video frames (images), use the following images:
-
-| ![Video frame thumbnail 1](images/ams-video-frame-thumbnails-1.PNG) | ![Video frame thumbnail 2](images/ams-video-frame-thumbnails-2.PNG) | ![Video frame thumbnail 3](images/ams-video-frame-thumbnails-3.PNG) |
-| :-: | :-: | :-: |
-| Frame 1 | Frame 2 | Frame 3 |
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
-1. Name the project **VideoReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages for the VideoReviews project.
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Microsoft.Rest.ClientRuntime.Azure
-- Newtonsoft.Json
-
-### Update the program's using statements
-
-Modify the program's using statements as follows.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-```
-
-### Add private properties
-
-Add the following private properties to namespace **VideoReviews**, class **Program**. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
--
-```csharp
-namespace VideoReviews
-{
- class Program
- {
- // NOTE: Enter a valid endpoint URL
- /// <summary>
- /// The endpoint URL of your subscription
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- // NOTE: Enter a valid subscription key.
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR CONTENT MODERATOR KEY";
-
- // NOTE: Replace this example team name with your Content Moderator team name.
- /// <summary>
- /// The name of the team to assign the job to.
- /// </summary>
- /// <remarks>This must be the team name you used to create your
- /// Content Moderator account. You can retrieve your team name from
- /// the Content Moderator web site. Your team name is the Id associated
- /// with your subscription.</remarks>
- private const string TeamName = "YOUR CONTENT MODERATOR TEAM ID";
-
- /// <summary>
- /// The minimum amount of time, in milliseconds, to wait between calls
- /// to the Content Moderator APIs.
- /// </summary>
- private const int throttleRate = 2000;
-```
-
-### Create Content Moderator Client object
-
-Add the following method definition to namespace **VideoReviews**, class **Program**.
-
-```csharp
-/// <summary>
-/// Returns a new Content Moderator client for your subscription.
-/// </summary>
-/// <returns>The new client.</returns>
-/// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
-/// When you have finished using the client,
-/// you should dispose of it either directly or indirectly. </remarks>
-public static ContentModeratorClient NewClient()
-{
- return new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey))
- {
- Endpoint = AzureEndpoint
- };
-}
-```
-
-## Create a video review
-
-Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**. For more information, see the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4).
-
-**CreateVideoReviews** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json".
-1. Your Content Moderator team name.
-1. An **IList\<CreateVideoReviewsBodyItem>** object. Each **CreateVideoReviewsBodyItem** object represents a video review. This quickstart creates one review at a time.
-
-**CreateVideoReviewsBodyItem** has several properties. At a minimum, you set the following properties:
-- **Content**. The URL of the video to be reviewed.
-- **ContentId**. An ID to assign to the video review.
-- **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.
-
-> [!NOTE]
-> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="id">The ID to assign to the video review.</param>
-/// <param name="content">The URL of the video to review.</param>
-/// <returns>The ID of the video review.</returns>
-private static string CreateReview(ContentModeratorClient client, string id, string content)
-{
- Console.WriteLine("Creating a video review.");
-
- List<CreateVideoReviewsBodyItem> body = new List<CreateVideoReviewsBodyItem>() {
- new CreateVideoReviewsBodyItem
- {
- Content = content,
- ContentId = id,
- /* Note: to create a published review, set the Status to "Pending".
- However, you cannot add video frames or a transcript to a published review. */
- Status = "Unpublished",
- }
- };
-
- var result = client.Reviews.CreateVideoReviews("application/json", TeamName, body);
-
- Thread.Sleep(throttleRate);
-
- // We created only one review.
- return result[0];
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS) rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
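
One way to cope with that limit, beyond the fixed `Thread.Sleep` used in this quickstart, is a small retry wrapper. The sketch below is illustrative only; it assumes throttling surfaces as a `Microsoft.Rest.HttpOperationException` (from the Microsoft.Rest.ClientRuntime package installed earlier), so adjust it to the exception type you actually observe:

```csharp
// An illustrative retry helper for throttled calls. Assumes throttling
// surfaces as Microsoft.Rest.HttpOperationException with HTTP status 429.
using System;
using System.Threading;
using Microsoft.Rest;

static class Throttling
{
    public static T WithRetry<T>(Func<T> call, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return call();
            }
            catch (HttpOperationException ex) when (
                ex.Response != null &&
                (int)ex.Response.StatusCode == 429 &&
                attempt < maxAttempts)
            {
                // Back off before retrying: 2s, 4s, 8s, ...
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```

A call that returns a value could then be wrapped as, for example, `var frames = Throttling.WithRetry(() => client.Reviews.GetVideoFrames(TeamName, review_id, 0));`.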
-
-## Add video frames to the video review
-
-You add video frames to a video review with **ContentModeratorClient.Reviews.AddVideoFrameUrl** (if your video frames are hosted online) or **ContentModeratorClient.Reviews.AddVideoFrameStream** (if your video frames are hosted locally). This quickstart assumes your video frames are hosted online, and so uses **AddVideoFrameUrl**. For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b76ae7151f0b10d451fd).
-
-**AddVideoFrameUrl** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json".
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. An **IList\<VideoFrameBodyItem>** object. Each **VideoFrameBodyItem** object represents a video frame.
-
-**VideoFrameBodyItem** has the following properties:
-- **Timestamp**. A string that contains, in seconds, the time in the video from which the video frame was taken.
-- **FrameImage**. The URL of the video frame.
-- **Metadata**. An IList\<VideoFrameBodyItemMetadataItem>. **VideoFrameBodyItemMetadataItem** is simply a key/value pair. Valid keys include:
-  - **reviewRecommended**. True if a human review of the video frame is recommended.
-  - **adultScore**. A value from 0 to 1 that rates the severity of adult content in the video frame.
-  - **a**. True if the video frame contains adult content.
-  - **racyScore**. A value from 0 to 1 that rates the severity of racy content in the video frame.
-  - **r**. True if the video frame contains racy content.
-- **ReviewerResultTags**. An IList\<VideoFrameBodyItemReviewerResultTagsItem>. **VideoFrameBodyItemReviewerResultTagsItem** is simply a key/value pair. An application can use these tags to organize video frames.
-
-> [!NOTE]
-> This quickstart generates random values for the **adultScore** and **racyScore** properties. In a production application, you would obtain these values from the [video moderation service](video-moderation-api.md), deployed as an Azure Media Service.
-
-Add the following method definitions to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video frame to add to a video review after the video review is created.
-/// </summary>
-/// <param name="url">The URL of the video frame image.</param>
-/// <returns>The video frame.</returns>
-private static VideoFrameBodyItem CreateFrameToAddToReview(string url, string timestamp_seconds)
-{
- // We generate random "adult" and "racy" scores for the video frame.
- Random rand = new Random();
-
- var frame = new VideoFrameBodyItem
- {
- // The timestamp is measured in milliseconds. Convert from seconds.
- Timestamp = (int.Parse(timestamp_seconds) * 1000).ToString(),
- FrameImage = url,
-
- Metadata = new List<VideoFrameBodyItemMetadataItem>
- {
- new VideoFrameBodyItemMetadataItem("reviewRecommended", "true"),
- new VideoFrameBodyItemMetadataItem("adultScore", rand.NextDouble().ToString()),
- new VideoFrameBodyItemMetadataItem("a", "false"),
- new VideoFrameBodyItemMetadataItem("racyScore", rand.NextDouble().ToString()),
- new VideoFrameBodyItemMetadataItem("r", "false")
- },
-
- ReviewerResultTags = new List<VideoFrameBodyItemReviewerResultTagsItem>()
- {
- new VideoFrameBodyItemReviewerResultTagsItem("tag1", "value1")
- }
- };
-
- return frame;
-}
-```
-
-```csharp
-/// <summary>
-/// Add a video frame to the indicated video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b76ae7151f0b10d451fd
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="url">The URL of the video frame image.</param>
-static void AddFrame(ContentModeratorClient client, string review_id, string url, string timestamp_seconds)
-{
- Console.WriteLine("Adding a frame to the review with ID {0}.", review_id);
-
- var frames = new List<VideoFrameBodyItem>()
- {
- CreateFrameToAddToReview(url, timestamp_seconds)
- };
-
- client.Reviews.AddVideoFrameUrl("application/json", TeamName, review_id, frames);
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Get video frames for video review
-
-You can get the video frames for a video review with **ContentModeratorClient.Reviews.GetVideoFrames**. **GetVideoFrames** takes the following parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. (Optional) The zero-based index of the first video frame to get.
-1. (Optional) The number of video frames to get.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Get the video frames assigned to the indicated video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7ba43e7151f0b10d45200
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-static void GetFrames(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Getting frames for the review with ID {0}.", review_id);
-
- Frames result = client.Reviews.GetVideoFrames(TeamName, review_id, 0);
- Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Get video review information
-
-You get information for a video review with **ContentModeratorClient.Reviews.GetReview**. **GetReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Get the information for the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void GetReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Getting the status for the review with ID {0}.", review_id);
-
-    var result = client.Reviews.GetReview(TeamName, review_id);
- Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Publish video review
-
-You publish a video review with **ContentModeratorClient.Reviews.PublishVideoReview**. **PublishVideoReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Publish the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7bb29e7151f0b10d45201
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void PublishReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Publishing the review with ID {0}.", review_id);
- client.Reviews.PublishVideoReview(TeamName, review_id);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Putting it all together
-
-Add the **Main** method definition to namespace VideoReviews, class Program. Finally, close the Program class and the VideoReviews namespace.
-
-```csharp
-static void Main(string[] args)
-{
- using (ContentModeratorClient client = NewClient())
- {
- // Create a review with the content pointing to a streaming endpoint (manifest)
- var streamingcontent = "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest";
- string review_id = CreateReview(client, "review1", streamingcontent);
-
- var frame1_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame1-00-17.PNG";
- var frame2_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-2-01-04.PNG";
- var frame3_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-3-02-24.PNG";
-
- // Add the frames from 17, 64, and 144 seconds.
- AddFrame(client, review_id, frame1_url, "17");
- AddFrame(client, review_id, frame2_url, "64");
- AddFrame(client, review_id, frame3_url, "144");
-
- // Get frames information and show
- GetFrames(client, review_id);
- GetReview(client, review_id);
-
- // Publish the review
- PublishReview(client, review_id);
-
- Console.WriteLine("Open your Content Moderator Dashboard and select Review > Video to see the review.");
- Console.WriteLine("Press any key to close the application.");
- Console.ReadKey();
- }
-}
-```
-
-## Run the program and review the output
-When you run the application, you see output similar to the following:
-
-```console
-Creating a video review.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Getting frames for the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-{
- "ReviewId": "201801v3212bda70ced4928b2cd7459c290c7dc",
- "VideoFrames": [
- {
- "Timestamp": "17000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_17000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.808312381528463"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.846378884206702"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- },
- {
- "Timestamp": "64000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_64000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.576078300166912"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.244768953064815"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- },
- {
- "Timestamp": "144000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_144000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.664480847150311"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.933817870418456"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- }
- ]
-}
-
-Getting the status for the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-{
- "ReviewId": "201801v3212bda70ced4928b2cd7459c290c7dc",
- "SubTeam": "public",
- "Status": "UnPublished",
- "ReviewerResultTags": [],
- "CreatedBy": "testreview6",
- "Metadata": [
- {
- "Key": "FrameCount",
- "Value": "3"
- }
- ],
- "Type": "Video",
- "Content": "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest",
- "ContentId": "review1",
- "CallbackEndpoint": null
-}
-
-Publishing the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Open your Content Moderator Dashboard and select Review > Video to see the review.
-Press any key to close the application.
-```
-
-## Check out your video review
-
-Finally, you see the video review in your Content Moderator review tool account on the **Review**>**Video** screen.
-
-![Video review for human moderators](images/ams-video-review.PNG)
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
cognitive-services Video Transcript Reviews Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md
- Title: Create video transcript reviews using .NET - Content Moderator
-description: Learn how to create video transcript reviews using the Azure Cognitive Services Content Moderator SDK for .NET.
- Previously updated : 10/24/2019
-# Create video transcript reviews using .NET
--
-This article provides information and code samples to help you quickly get started using the [Content Moderator SDK with C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-- Create a video review for human moderators
-- Add a moderated transcript to the review
-- Publish the review
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site if you haven't done so already.
-- This article assumes that you have [moderated the video](video-moderation-api.md) and [created the video review](video-reviews-quickstart-dotnet.md) in the review tool for human decision making. You now want to add moderated video transcripts in the review tool.
-## Ensure your API key can call the review API (Job creation)
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](./review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-## Prepare your video for review
-
-To add a transcript to a video review, the video must be published online, and you need its streaming endpoint. The streaming endpoint allows the review tool's video player to play the video.
-
-![Video demo thumbnail](images/ams-video-demo-view.PNG)
-- Copy the **URL** on this [Azure Media Services demo](https://aka.ms/azuremediaplayer?url=https%3A%2F%2Famssamples.streaming.mediaservices.windows.net%2F91492735-c523-432b-ba01-faba6c2206a2%2FAzureMediaServicesPromo.ism%2Fmanifest) page for the manifest URL.
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
-1. Name the project **VideoTranscriptReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages for the VideoTranscriptReviews project.
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Microsoft.Rest.ClientRuntime.Azure
-- Newtonsoft.Json
-### Update the program's using statements
-
-Modify the program's using statements as follows.
--
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-```
-
-### Add private properties
-
-Add the following private properties to namespace **VideoTranscriptReviews**, class **Program**. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
-
-```csharp
-namespace VideoTranscriptReviews
-{
- class Program
- {
- // NOTE: Enter a valid endpoint URL
- /// <summary>
- /// The endpoint URL of your subscription
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- // NOTE: Enter a valid subscription key.
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR CONTENT MODERATOR KEY";
-
- // NOTE: Replace this example team name with your Content Moderator team name.
- /// <summary>
- /// The name of the team to assign the job to.
- /// </summary>
- /// <remarks>This must be the team name you used to create your
- /// Content Moderator account. You can retrieve your team name from
- /// the Content Moderator web site. Your team name is the Id associated
- /// with your subscription.</remarks>
- private const string TeamName = "YOUR CONTENT MODERATOR TEAM ID";
-
- /// <summary>
- /// The minimum amount of time, in milliseconds, to wait between calls
- /// to the Content Moderator APIs.
- /// </summary>
- private const int throttleRate = 2000;
-```
-
-### Create Content Moderator Client object
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Returns a new Content Moderator client for your subscription.
-/// </summary>
-/// <returns>The new client.</returns>
-/// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
-/// When you have finished using the client,
-/// you should dispose of it either directly or indirectly. </remarks>
-public static ContentModeratorClient NewClient()
-{
- return new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey))
- {
- Endpoint = AzureEndpoint
- };
-}
-```
-
-## Create a video review
-
-Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**. For more information, see the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4).
-
-**CreateVideoReviews** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json".
-1. Your Content Moderator team name.
-1. An **IList\<CreateVideoReviewsBodyItem>** object. Each **CreateVideoReviewsBodyItem** object represents a video review. This quickstart creates one review at a time.
-
-**CreateVideoReviewsBodyItem** has several properties. At a minimum, you set the following properties:
-- **Content**. The URL of the video to be reviewed.
-- **ContentId**. An ID to assign to the video review.
-- **Status**. Set the value to "Unpublished". If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.
-> [!NOTE]
-> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="id">The ID to assign to the video review.</param>
-/// <param name="content">The URL of the video to review.</param>
-/// <returns>The ID of the video review.</returns>
-private static string CreateReview(ContentModeratorClient client, string id, string content)
-{
- Console.WriteLine("Creating a video review.");
-
- List<CreateVideoReviewsBodyItem> body = new List<CreateVideoReviewsBodyItem>() {
- new CreateVideoReviewsBodyItem
- {
- Content = content,
- ContentId = id,
- /* Note: to create a published review, set the Status to "Pending".
- However, you cannot add video frames or a transcript to a published review. */
- Status = "Unpublished",
- }
- };
-
- var result = client.Reviews.CreateVideoReviews("application/json", TeamName, body);
-
- Thread.Sleep(throttleRate);
-
- // We created only one review.
- return result[0];
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS) rate limit. If you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
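If you want to handle throttling programmatically instead of relying only on the fixed `Thread.Sleep` pauses, you can wrap SDK calls in a small retry helper. The following is a minimal sketch, not part of the original sample: the helper name, attempt count, linear back-off, and the message-based check for a 429 status code are illustrative assumptions.

```csharp
/// <summary>
/// Invokes an SDK call and retries with a linear back-off when the service
/// reports throttling (HTTP 429). Sketch only; adjust the detection and
/// back-off strategy to your needs.
/// </summary>
static T CallWithRetry<T>(Func<T> apiCall, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return apiCall();
        }
        catch (Exception ex) when (ex.Message.Contains("429") && attempt < maxAttempts)
        {
            // Back off before retrying; throttleRate is the constant defined earlier.
            Thread.Sleep(throttleRate * attempt);
        }
    }
}
```

For example, `CallWithRetry(() => client.Reviews.CreateVideoReviews("application/json", TeamName, body))` retries the review creation call if it is throttled.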
-
-## Add transcript to video review
-
-You add a transcript to a video review with **ContentModeratorClient.Reviews.AddVideoTranscript**. **AddVideoTranscript** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. A **Stream** object that contains the transcript.
-
-The transcript must be in the WebVTT format. For more information, see [WebVTT: The Web Video Text Tracks Format](https://www.w3.org/TR/webvtt1/).
-
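For reference, a minimal WebVTT transcript is the `WEBVTT` header followed by one or more cues, each with start/end timings and caption text. The following constant is a hypothetical example for illustration only; the actual sample transcript used by this quickstart appears in the **Main** method later.

```csharp
// A minimal, hypothetical WebVTT transcript: the WEBVTT header line,
// a blank line, then one cue with its timings and text.
const string minimalTranscript = @"WEBVTT

00:05.000 --> 00:10.000
Hello, and welcome to this video.";
```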
-> [!NOTE]
-> The program uses a sample transcript in the VTT format. In a real-world solution, you use the Azure Media Indexer service to [generate a transcript](../../media-services/previous/media-services-index-content.md) from a video.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Add a transcript to the indicated video review.
-/// The transcript must be in the WebVTT format.
-/// For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b8b2e7151f0b10d451fe
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="transcript">The video transcript.</param>
-static void AddTranscript(ContentModeratorClient client, string review_id, string transcript)
-{
- Console.WriteLine("Adding a transcript to the review with ID {0}.", review_id);
- client.Reviews.AddVideoTranscript(TeamName, review_id, new MemoryStream(System.Text.Encoding.UTF8.GetBytes(transcript)));
- Thread.Sleep(throttleRate);
-}
-```
-
-## Add a transcript moderation result to video review
-
-In addition to adding a transcript to a video review, you also add the result of moderating that transcript. You do so with **ContentModeratorClient.Reviews.AddVideoTranscriptModerationResult**. For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b93ce7151f0b10d451ff).
-
-**AddVideoTranscriptModerationResult** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json".
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. An IList\<TranscriptModerationBodyItem>. A **TranscriptModerationBodyItem** has the following properties:
-    1. **Terms**. An IList\<TranscriptModerationBodyItemTermsItem>. A **TranscriptModerationBodyItemTermsItem** has the following properties:
-        1. **Index**. The zero-based index of the term.
-        1. **Term**. A string that contains the term.
-    1. **Timestamp**. A string that contains, in seconds, the time in the transcript at which the terms are found.
-
-The transcript must be in the WebVTT format. For more information, see [WebVTT: The Web Video Text Tracks Format](https://www.w3.org/TR/webvtt1/).
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program. This method submits a transcript to the **ContentModeratorClient.TextModeration.ScreenText** method. It then translates the result into an IList\<TranscriptModerationBodyItem> and submits it to **AddVideoTranscriptModerationResult**.
-
-```csharp
-/// <summary>
-/// Add the results of moderating a video transcript to the indicated video review.
-/// For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b93ce7151f0b10d451ff
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="transcript">The video transcript.</param>
-static void AddTranscriptModerationResult(ContentModeratorClient client, string review_id, string transcript)
-{
- Console.WriteLine("Adding a transcript moderation result to the review with ID {0}.", review_id);
-
- // Screen the transcript using the Text Moderation API. For more information, see:
- // https://westus2.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f
- Screen screen = client.TextModeration.ScreenText("eng", "text/plain", transcript);
-
- // Map the term list returned by ScreenText into a term list we can pass to AddVideoTranscriptModerationResult.
- List<TranscriptModerationBodyItemTermsItem> terms = new List<TranscriptModerationBodyItemTermsItem>();
- if (null != screen.Terms)
- {
- foreach (var term in screen.Terms)
- {
- if (term.Index.HasValue)
- {
- terms.Add(new TranscriptModerationBodyItemTermsItem(term.Index.Value, term.Term));
- }
- }
- }
-
- List<TranscriptModerationBodyItem> body = new List<TranscriptModerationBodyItem>()
- {
- new TranscriptModerationBodyItem()
- {
- Timestamp = "0",
- Terms = terms
- }
- };
-
- client.Reviews.AddVideoTranscriptModerationResult("application/json", TeamName, review_id, body);
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Publish video review
-
-You publish a video review with **ContentModeratorClient.Reviews.PublishVideoReview**. **PublishVideoReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Publish the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7bb29e7151f0b10d45201
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void PublishReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Publishing the review with ID {0}.", review_id);
- client.Reviews.PublishVideoReview(TeamName, review_id);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Putting it all together
-
-Add the **Main** method definition to namespace VideoTranscriptReviews, class Program. Finally, close the Program class and the VideoTranscriptReviews namespace.
-
-> [!NOTE]
-> The program uses a sample transcript in the VTT format. In a real-world solution, you use the Azure Media Indexer service to [generate a transcript](../../media-services/previous/media-services-index-content.md) from a video.
-
-```csharp
-static void Main(string[] args)
-{
- using (ContentModeratorClient client = NewClient())
- {
- // Create a review with the content pointing to a streaming endpoint (manifest)
- var streamingcontent = "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest";
- string review_id = CreateReview(client, "review1", streamingcontent);
-
- var transcript = @"WEBVTT
-
- 01:01.000 --> 02:02.000
- First line with a negative word in a transcript.
-
- 02:03.000 --> 02:25.000
- This is another line in the transcript.
- ";
-
- AddTranscript(client, review_id, transcript);
-
- AddTranscriptModerationResult(client, review_id, transcript);
-
- // Publish the review
- PublishReview(client, review_id);
-
- Console.WriteLine("Open your Content Moderator Dashboard and select Review > Video to see the review.");
- Console.WriteLine("Press any key to close the application.");
- Console.ReadKey();
- }
-}
-```
-
-## Run the program and review the output
-
-When you run the application, you see output similar to the following:
-
-```console
-Creating a video review.
-Adding a transcript to the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Adding a transcript moderation result to the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Publishing the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Open your Content Moderator Dashboard and select Review > Video to see the review.
-Press any key to close the application.
-```
-
-## Navigate to your video transcript review
-
-Go to the video transcript review in your Content Moderator review tool on the **Review**>**Video**>**Transcript** screen.
-
-You see the following features:
-- The two lines of transcript you added
-- The profanity term found and highlighted by the text moderation service
-- Selecting a line of the transcript, which starts the video from that timestamp
-![Video transcript review for human moderators](images/ams-video-transcript-review.PNG)
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
-
-Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) in the review tool.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/whats-new.md
- Title: What's new in Content Moderator?
-description: This article contains news about Content Moderator.
- Previously updated : 09/28/2021
-# What's new in Content Moderator
--
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
-
-## August 2020
-
-### New video moderation UI
-
-* The Review tool's video moderation page has been updated with many new tools and features to facilitate tagging and processing videos and parts of videos. See the [Video moderation guide](./video-moderation-human-review.md) to learn more.
-
-### New dashboard
-
-* The Review tool dashboard has been updated to show more information about open, pending, and completed reviews. See the [Review tool guide](./Review-Tool-User-Guide/human-in-the-loop.md) for a tour of the new features.
-
-## Cognitive Services updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/data-sources-and-content.md
Title: Data sources and content types - QnA Maker description: Learn how to import question and answer pairs from data sources and supported content types, which include many standard structured documents such as PDF, DOCX, and TXT - QnA Maker. ++ Previously updated : 10/13/2020 Last updated : 01/11/2022
# Importing from data sources
A knowledge base consists of question and answer pairs brought in by public URLs and files. +
## Data source locations
Content is brought into a knowledge base from a data source. Data source locations are **public URLs or files**, which do not require authentication.
cognitive-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
description: Improve the quality of your knowledge base with active learning. Re
Previously updated : 04/06/2020 Last updated : 01/11/2022++ ms.devlang: csharp, javascript
<a name="accept-an-active-learning-suggestion-in-the-knowledge-base"></a>
-Active Learning alters the Knowledge Base or Search Service after you approve the suggestion, then save and train. If you approve the suggestion it will be added as an alternate question.
+Active Learning alters the Knowledge Base or Search Service after you approve the suggestion, then save and train. If you approve the suggestion, it will be added as an alternate question.
## Turn on active learning
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
## Speech Devices SDK 0.5.0: 2018-Aug release
- Improved the accuracy of speech recognition by fixing a bug in the audio processing code.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0. For more information, see its
-  [release notes](releasenotes.md#cognitive-services-speech-sdk-050-2018-july-release).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0.
## Speech Devices SDK 0.2.12733: 2018-May release
cognitive-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-lower-speech-synthesis-latency.md
See [How to configure OpenSSL for Linux](how-to-configure-openssl-linux.md#certi
### Use latest Speech SDK
-We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
-For example, we fixed a `TCP_NODELAY` setting issue in [1.16.0](releasenotes.md#speech-sdk-1160-2021-march-release), which reduced extra one route trip time.
+We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
## Load test guideline
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Previously updated : 05/15/2021- Last updated : 01/11/2022+
# Speech Service release notes
+See below for information about changes to Speech services and resources.
-## OnPrem Speech 2022-Jan release
+## What's new?
-### Speech-to-text Container v2.18.0
-- Regular monthly updates (including security upgrades and vulnerability fixes).
+* Speaker Recognition service is generally available (GA). With [Speaker Recognition](./speaker-recognition-overview.md) you can accurately verify and identify speakers by their unique voice characteristics.
+* Speech SDK 1.19.0 released, including Speaker Recognition support, Mac M1 ARM support, dynamic OpenSSL linking on Linux, and the end of support for Ubuntu 16.04.
+* Custom Neural Voice extended to support [49 locales](./language-support.md#custom-neural-voice).
+* Commitment Tiers added to [pricing options](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-### Neural-text-to-speech Container v1.12.0
-- Support for new locales and voices: `am-et-amehaneural`, `am-et-mekdesneural`, `so-so-muuseneural`, and `so-so-ubaxneural`.
-- Regular monthly updates (including security upgrades and vulnerability fixes).
+## Release notes
-## Speech SDK 1.19.0: 2021-Nov release
+**Choose a service or resource**
-
+# [SDK](#tab/speech-sdk)
-**Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk). The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-#### Highlights
+# [CLI](#tab/speech-cli)
-- Speaker Recognition service is generally available (GA) now. Speech SDK APIs are available on C++, C#, Java, and JavaScript. With Speaker Recognition you can accurately verify and identify speakers by their unique voice characteristics. See the [documentation](speaker-recognition-overview.md) for more details.
-- We have dropped support for Ubuntu 16.04 in conjunction with Azure DevOps and GitHub. Ubuntu 16.04 reached end of life in April 2021. Please migrate Ubuntu 16.04 workflows to Ubuntu 18.04 or newer.
+# [Text-to-speech](#tab/text-to-speech)
-- OpenSSL linking in Linux binaries changed to dynamic. Linux binary size has been reduced by about 50%.
-- Mac M1 ARM-based silicon support added.
+# [Speech-to-text](#tab/speech-to-text)
-
-#### New features
+# [Containers](#tab/containers)
-- **C++/C#/Java**: New APIs added to enable audio processing support for speech input with Microsoft Audio Stack. Documentation [here](audio-processing-overview.md).
-- **C++**: New APIs for intent recognition to facilitate more advanced pattern matching. This includes List and Prebuilt Integer entities as well as support for grouping intents and entities as models (documentation, updates, and samples are under development and will be published in the near future).
-- **Mac**: Support for ARM64 (M1)-based silicon for CocoaPod, Python, Java, and NuGet packages. Related to [GitHub issue 1244](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1244).
-- **iOS/Mac**: iOS and macOS binaries are now packaged into xcframework. Related to [GitHub issue 919](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/919).
-- **iOS/Mac**: Support for Mac Catalyst. Related to [GitHub issue 1171](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1171).
-- **Linux**: New tar package added for CentOS 7. See [About the Speech SDK](speech-sdk.md).
-- **JavaScript**: VoiceProfile & SpeakerRecognizer APIs made async/awaitable.
-- **JavaScript**: Support added for US government Azure regions.
-- **Windows**: Support added for playback on Universal Windows Platform (UWP).
-
-
-#### Bug fixes
-- **Android**: OpenSSL security update (updated to version 1.1.1l) for Android packages.
-- **Python**: Resolved a bug where selecting a speaker device on Python fails.
-- **Core**: Automatically reconnect when a connection attempt fails.
-- **iOS**: Audio compression disabled on iOS packages due to instability and bitcode build problems when using GStreamer. Details are available via [GitHub issue 1209](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1209).
-
-
-#### Samples [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-- **Mac/iOS**: Updated samples and quickstarts to use the xcframework package.
-- **.NET**: Samples updated to use .NET Core 3.1.
-- **JavaScript**: Added a sample for voice assistants.
-
-## Text-to-Speech 2021-October release
-**New languages and voices added for Neural TTS**
-- **49 new languages and variants introduced** - 98 new voices in 49 new locales are added into the neural TTS languages list: Adri in `af-ZA` Afrikaans (South Africa), Willem in `af-ZA` Afrikaans (South Africa), Mekdes in `am-ET` Amharic (Ethiopia), Ameha in `am-ET` Amharic (Ethiopia), Fatima in `ar-AE` Arabic (United Arab Emirates), Hamdan in `ar-AE` Arabic (United Arab Emirates), Laila in `ar-BH` Arabic (Bahrain), Ali in `ar-BH` Arabic (Bahrain), Amina in `ar-DZ` Arabic (Algeria), Ismael in `ar-DZ` Arabic (Algeria), Rana in `ar-IQ` Arabic (Iraq), Bassel in `ar-IQ` Arabic (Iraq), Sana in `ar-JO` Arabic (Jordan), Taim in `ar-JO` Arabic (Jordan), Noura in `ar-KW` Arabic (Kuwait), Fahed in `ar-KW` Arabic (Kuwait), Iman in `ar-LY` Arabic (Libya), Omar in `ar-LY` Arabic (Libya), Mouna in `ar-MA` Arabic (Morocco), Jamal in `ar-MA` Arabic (Morocco), Amal in `ar-QA` Arabic (Qatar), Moaz in `ar-QA` Arabic (Qatar), Amany in `ar-SY` Arabic (Syria), Laith in `ar-SY` Arabic (Syria), Reem in `ar-TN` Arabic (Tunisia), Hedi in `ar-TN` Arabic (Tunisia), Maryam in `ar-YE` Arabic (Yemen), Saleh in `ar-YE` Arabic (Yemen), Nabanita in `bn-BD` Bangla (Bangladesh), Pradeep in `bn-BD` Bangla (Bangladesh), Asilia in `en-KE` English (Kenya), Chilemba in `en-KE` English (Kenya), Ezinne in `en-NG` English (Nigeria), Abeo in `en-NG` English (Nigeria), Imani in `en-TZ` English (Tanzania), Elimu in `en-TZ` English (Tanzania), Sofia in `es-BO` Spanish (Bolivia), Marcelo in `es-BO` Spanish (Bolivia), Catalina in `es-CL` Spanish (Chile), Lorenzo in `es-CL` Spanish (Chile), Maria in `es-CR` Spanish (Costa Rica), Juan in `es-CR` Spanish (Costa Rica), Belkys in `es-CU` Spanish (Cuba), Manuel in `es-CU` Spanish (Cuba), Ramona in `es-DO` Spanish (Dominican Republic), Emilio in `es-DO` Spanish (Dominican Republic), Andrea in `es-EC` Spanish (Ecuador), Luis in `es-EC` Spanish (Ecuador), Teresa in `es-GQ` Spanish (Equatorial Guinea), Javier in `es-GQ` Spanish (Equatorial Guinea), Marta in `es-GT` Spanish (Guatemala), Andres in `es-GT` Spanish (Guatemala), Karla in `es-HN` Spanish (Honduras), Carlos in `es-HN` Spanish (Honduras), Yolanda in `es-NI` Spanish (Nicaragua), Federico in `es-NI` Spanish (Nicaragua), Margarita in `es-PA` Spanish (Panama), Roberto in `es-PA` Spanish (Panama), Camila in `es-PE` Spanish (Peru), Alex in `es-PE` Spanish (Peru), Karina in `es-PR` Spanish (Puerto Rico), Victor in `es-PR` Spanish (Puerto Rico), Tania in `es-PY` Spanish (Paraguay), Mario in `es-PY` Spanish (Paraguay), Lorena in `es-SV` Spanish (El Salvador), Rodrigo in `es-SV` Spanish (El Salvador), Valentina in `es-UY` Spanish (Uruguay), Mateo in `es-UY` Spanish (Uruguay), Paola in `es-VE` Spanish (Venezuela), Sebastian in `es-VE` Spanish (Venezuela), Dilara in `fa-IR` Persian (Iran), Farid in `fa-IR` Persian (Iran), Blessica in `fil-PH` Filipino (Philippines), Angelo in `fil-PH` Filipino (Philippines), Sabela in `gl-ES` Galician (Spain), Roi in `gl-ES` Galician (Spain), Siti in `jv-ID` Javanese (Indonesia), Dimas in `jv-ID` Javanese (Indonesia), Sreymom in `km-KH` Khmer (Cambodia), Piseth in `km-KH` Khmer (Cambodia), Nilar in `my-MM` Burmese (Myanmar), Thiha in `my-MM` Burmese (Myanmar), Ubax in `so-SO` Somali (Somalia), Muuse in `so-SO` Somali (Somalia), Tuti in `su-ID` Sundanese (Indonesia), Jajang in `su-ID` Sundanese (Indonesia), Rehema in `sw-TZ` Swahili (Tanzania), Daudi in `sw-TZ` Swahili (Tanzania), Saranya in `ta-LK` Tamil (Sri Lanka), Kumar in `ta-LK` Tamil (Sri Lanka), Venba in `ta-SG` Tamil (Singapore), Anbu in `ta-SG` Tamil (Singapore), Gul in `ur-IN` Urdu (India), Salman in `ur-IN` Urdu (India), Madina in `uz-UZ` Uzbek (Uzbekistan), Sardor in `uz-UZ` Uzbek (Uzbekistan), Thando in `zu-ZA` Zulu (South Africa), Themba in `zu-ZA` Zulu (South Africa).
-## Text-to-Speech 2021-September release
-- **New chatbot voice in `en-US` English (US)**: Sara represents a young female adult who talks more casually and fits best for chatbot scenarios.
-- **New styles added for `ja-JP` Japanese voice Nanami**: Three new styles are now available with Nanami: chat, customer service, and cheerful.
-- **Overall pronunciation improvement**: Ardi in `id-ID`, Premwadee in `th-TH`, Christel in `da-DK`, HoaiMy and NamMinh in `vi-VN`.
-- **Two new voices in `zh-CN` Chinese (Mandarin, China) in preview**: Xiaochen & Xiaoyan, optimized for spontaneous speech and customer service scenarios.
-## Text-to-Speech 2021-July release
-
-**Neural TTS updates**
-- Reduced pronunciation errors in Hebrew by 20%.
-**Speech Studio updates**
-- **Custom Neural Voice**: Updated the training pipeline to UniTTSv3, which improves model quality and reduces training time by 50% for acoustic models.
-- **Audio Content Creation**: Fixed the "Export" performance issue and the bug on custom neural voice selection.
-## Speech SDK 1.18.0: 2021-July release
-
-**Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk).
-
-**Highlights summary**
-- Ubuntu 16.04 reached end of life in April 2021. In conjunction with Azure DevOps and GitHub, we will drop support for 16.04 in September 2021. Please migrate ubuntu-16.04 workflows to ubuntu-18.04 or newer before then.
-#### New features
-- **C++**: Simple Language Pattern matching with the Intent Recognizer now makes it easier to [implement simple intent recognition scenarios](./get-started-intent-recognition.md?pivots=programming-language-cpp).
-- **C++/C#/Java**: We added a new API, `GetActivationPhrasesAsync()`, to the `VoiceProfileClient` class for receiving a list of valid activation phrases in the speaker recognition enrollment phase for independent recognition scenarios.
- - **Important**: The Speaker Recognition feature is in Preview. All voice profiles created in Preview will be discontinued 90 days after the Speaker Recognition feature is moved out of Preview into General Availability. At that point the Preview voice profiles will stop functioning.
-- **Python**: Added [support for continuous Language Identification (LID)](./how-to-automatic-language-detection.md?pivots=programming-language-python) on the existing `SpeechRecognizer` and `TranslationRecognizer` objects.
-- **Python**: Added a [new Python object](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.sourcelanguagerecognizer) named `SourceLanguageRecognizer` to do one-time or continuous LID (without recognition or translation).
-- **JavaScript**: `getActivationPhrasesAsync` API added to the `VoiceProfileClient` class for receiving a list of valid activation phrases in the speaker recognition enrollment phase for independent recognition scenarios.
-- **JavaScript**: `VoiceProfileClient`'s `enrollProfileAsync` API is now async awaitable. See [this independent identification code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/speaker-recognition/identification/independent-identification.js) for example usage.
-#### Improvements
-- **Java**: **AutoCloseable** support added to many Java objects. The try-with-resources model is now supported to release resources. See [this sample that uses try-with-resources](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java#L28). Also see the Oracle Java documentation tutorial for [The try-with-resources Statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) to learn about this pattern.
-- **Disk footprint** has been significantly reduced for many platforms and architectures. Examples for the `Microsoft.CognitiveServices.Speech.core` binary: x64 Linux is 475KB smaller (8.0% reduction); ARM64 Windows UWP is 464KB smaller (11.5% reduction); x86 Windows is 343KB smaller (17.5% reduction); and x64 Windows is 451KB smaller (19.4% reduction).
-#### Bug fixes
-- **Java**: Fixed synthesis error when the synthesis text contains surrogate characters. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1118).
-- **JavaScript**: Browser microphone audio processing now uses `AudioWorkletNode` instead of the deprecated `ScriptProcessorNode`. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/391).
-- **JavaScript**: Correctly keep conversations alive during long-running conversation translation scenarios. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/389).
-- **JavaScript**: Fixed issue with recognizer reconnecting to a mediastream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/385).
-- **JavaScript**: Fixed issue with recognizer reconnecting to a pushStream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/pull/399).
-- **JavaScript**: Corrected word-level offset calculation in detailed recognition results. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/394).
-#### Samples
-- Java quickstart samples updated [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java).
-- JavaScript speaker recognition samples updated to show new usage of `enrollProfileAsync()`. See samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/node).
-## Text-to-Speech 2021-June release
-
-**Speech Studio updates**
-- **Custom Neural Voice**: Custom Neural Voice training extended to support Southeast Asia. New features released to support data upload status checking.
-- **Audio Content Creation**: Released a new feature to support custom lexicon. With this feature, users can easily create their lexicon files and define the customized pronunciation for their audio output.
-## Text-to-Speech 2021-May release
-
-**New languages and voices added for neural TTS**
-- **Ten new languages introduced** - 20 new voices in 10 new locales are added into the neural TTS language list: Yan in `en-HK` English (Hong Kong), Sam in `en-HK` English (Hong Kong), Molly in `en-NZ` English (New Zealand), Mitchell in `en-NZ` English (New Zealand), Luna in `en-SG` English (Singapore), Wayne in `en-SG` English (Singapore), Leah in `en-ZA` English (South Africa), Luke in `en-ZA` English (South Africa), Dhwani in `gu-IN` Gujarati (India), Niranjan in `gu-IN` Gujarati (India), Aarohi in `mr-IN` Marathi (India), Manohar in `mr-IN` Marathi (India), Elena in `es-AR` Spanish (Argentina), Tomas in `es-AR` Spanish (Argentina), Salome in `es-CO` Spanish (Colombia), Gonzalo in `es-CO` Spanish (Colombia), Paloma in `es-US` Spanish (US), Alonso in `es-US` Spanish (US), Zuri in `sw-KE` Swahili (Kenya), Rafiki in `sw-KE` Swahili (Kenya).
-- **Eleven new en-US voices in preview** - 11 new en-US voices in preview are added to American English; they are Ashley, Amber, Ana, Brandon, Christopher, Cora, Elizabeth, Eric, Michelle, Monica, and Jacob.
-- **Five `zh-CN` Chinese (Mandarin, Simplified) voices are generally available** - 5 Chinese (Mandarin, Simplified) voices changed from preview to generally available: Yunxi, Xiaomo, Xiaoman, Xiaoxuan, and Xiaorui. These voices are now available in all [regions](regions.md#prebuilt-neural-voices). Yunxi gains a new 'assistant' style, which is suitable for chat bots and voice agents. Xiaomo's voice styles are refined to be more natural and featured.
-## Speech SDK 1.17.0: 2021-May release
-
->[!NOTE]
->Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk).
-
-**Highlights summary**
-- Smaller footprint - we continue to decrease the memory and disk footprint of the Speech SDK and its components.
-- A new stand-alone language identification API allows you to recognize what language is being spoken.
-- Develop speech-enabled mixed reality and gaming applications using Unity on macOS.
-- You can now use Text-to-Speech in addition to speech recognition from the Go programming language.
-- Several bug fixes to address issues YOU, our valued customers, have flagged on GitHub! THANK YOU! Keep the feedback coming!
-#### New features
-- **C++/C#**: New stand-alone Single-Shot/At-Start and Continuous Language Detection via the `SourceLanguageRecognizer` API. If you only want to detect the language(s) spoken in audio content, this is the API to do that. See details for [C++](/cpp/cognitive-services/speech/sourcelanguagerecognizer) and [C#](/dotnet/api/microsoft.cognitiveservices.speech.sourcelanguagerecognizer).
-- **C++/C#**: Speech Recognition and Translation Recognition now support both at-start and continuous Language Identification so you can programmatically determine which language(s) are being spoken before they are transcribed or translated. See documentation [here for Speech Recognition](how-to-automatic-language-detection.md) and [here for Speech Translation](get-started-speech-translation.md).
-- **C#**: Added Unity support on macOS (x64). This unlocks speech recognition and speech synthesis use cases in mixed reality and gaming!
-- **Go**: We added support for speech synthesis/Text-to-Speech to the Go programming language to make speech synthesis available in even more use cases. See our [quickstart](get-started-text-to-speech.md?tabs=windowsinstall&pivots=programming-language-go) or our [reference documentation](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go).
-- **C++/C#/Jav).
-- **C++/C#/Java/Python/Objective-C/Go**: We now expose the latency and underrun time in `SpeechSynthesisResult` to help you monitor and diagnose speech synthesis latency issues. See details for [C++](/cpp/cognitive-services/speech/speechsynthesisresult), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisresult) and [Go](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go#readme-reference).
-- **C++/C#/Jav#prebuilt-neural-voices) to change the default.
-- **C++/C#/Java/Python/Objective-C/Go**: We added a Gender property to the synthesis voice info to make it easier to select voices based on gender. This addresses [GitHub issue #1055](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1055).
-- **C++, C#, Java, JavaScript**: We now support `retrieveEnrollmentResultAsync`, `getAuthorizationPhrasesAsync` and `getAllProfilesAsync()` in Speaker Recognition to ease user management of all voice profiles for a given account. See documentation for [C++](/cpp/cognitive-services/speech/speaker-voiceprofileclient), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speaker.voiceprofileclient), [Java](/java/api/com.microsoft.cognitiveservices.speech.voiceprofileclient), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient). This addresses [GitHub issue #338](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/338).
-- **JavaScript**: We added retry for connection failures that will make your JavaScript-based speech applications more robust.
-#### Improvements
-- Linux and Android Speech SDK binaries have been updated to use the latest version of OpenSSL (1.1.1k).
-- Code size improvements:
- - Language Understanding is now split into a separate "lu" library.
- - Windows x64 core binary size decreased by 14.4%.
- - Android ARM64 core binary size decreased by 13.7%.
- - other components also decreased in size.
-
-#### Bug fixes
-- **All**: Fixed [GitHub issue #842](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/842) for ServiceTimeout. You can now transcribe very long audio files using the Speech SDK without the connection to the service terminating with this error. However, we still recommend you use [batch transcription](batch-transcription.md) for long files.
-- **C#**: Fixed [GitHub issue #947](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/947) where no speech input could leave your app in a bad state.
-- **Java**: Fixed [GitHub Issue #997](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/997) where the Java Speech SDK 1.16 crashes when using DialogServiceConnector without a network connection or an invalid subscription key.
-- Fixed a crash when abruptly stopping speech recognition (e.g. using CTRL+C on a console app).
-- **Java**: Added a fix to delete temporary files on Windows when using the Java Speech SDK.
-- **Java**: Fixed [GitHub issue #994](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/994) where calling `DialogServiceConnector.stopListeningAsync` could result in an error.
-- **Java**: Fixed a customer issue in the [virtual assistant quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/jre/virtual-assistant).
-- **JavaScript**: Fixed [GitHub issue #366](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/366) where `ConversationTranslator` threw an error 'this.cancelSpeech is not a function'.
-- **JavaScript**: Fixed [GitHub issue #298](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/298) where the 'Get result as an in-memory stream' sample played sound out loud.
-- **JavaScript**: Fixed [GitHub issue #350](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/350) where calling `AudioConfig` could result in a 'ReferenceError: MediaStream is not defined'.
-- **JavaScript**: Fixed an UnhandledPromiseRejection warning in Node.js for long-running sessions.
-#### Samples
-- Updated Unity samples documentation for macOS [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
-- A React Native sample for the Cognitive Services speech recognition service is now available [here](https://github.com/microsoft/cognitive-services-sdk-react-native-example).
-## Speech CLI (also known as SPX): 2021-May release
-
->[!NOTE]
->Get started with the Azure Speech service command line interface (CLI) [here](spx-basics.md). The CLI enables you to use the Azure Speech service without writing any code.
-
-#### New features
-- SPX now supports Profile, Speaker ID, and Speaker verification - try `spx profile` and `spx speaker` from the SPX command line.
-- We also added Dialog support - try `spx dialog` from the SPX command line.
-- SPX help improvements. Please give us feedback about how this works for you by opening a [GitHub issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).
-- We've decreased the size of the SPX .NET tool install.
-**COVID-19 abridged testing**:
-
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2021-April release
-
-**Neural TTS is available across 21 regions**
-- **Twelve new regions added** - Neural TTS is now available in these 12 new regions: `Japan East`, `Japan West`, `Korea Central`, `North Central US`, `North Europe`, `South Central US`, `Southeast Asia`, `UK South`, `West Central US`, `West Europe`, `West US`, `West US 2`. Check [here](regions.md#text-to-speech) for the full list of 21 supported regions.
-## Text-to-Speech 2021-March release
-
-**New languages and voices added for neural TTS**
-- **Six new languages introduced** - 12 new voices in 6 new locales are added into the neural TTS language list: Nia in `cy-GB` Welsh (United Kingdom), Aled in `cy-GB` Welsh (United Kingdom), Rosa in `en-PH` English (Philippines), James in `en-PH` English (Philippines), Charline in `fr-BE` French (Belgium), Gerard in `fr-BE` French (Belgium), Dena in `nl-BE` Dutch (Belgium), Arnaud in `nl-BE` Dutch (Belgium), Polina in `uk-UA` Ukrainian (Ukraine), Ostap in `uk-UA` Ukrainian (Ukraine), Uzma in `ur-PK` Urdu (Pakistan), Asad in `ur-PK` Urdu (Pakistan).
-- **Five languages from preview to GA** - 10 voices in 5 locales introduced in 2020-November are now GA: Kert in `et-EE` Estonian (Estonia), Colm in `ga-IE` Irish (Ireland), Nils in `lv-LV` Latvian (Latvia), Leonas in `lt-LT` Lithuanian (Lithuania), Joseph in `mt-MT` Maltese (Malta).
-- **New male voice added for French (Canada)** - A new voice, Antoine, is available for `fr-CA` French (Canada).
-- **Quality improvement** - Pronunciation error rate reduction on `hu-HU` Hungarian - 48.17%, `nb-NO` Norwegian - 52.76%, `nl-NL` Dutch (Netherlands) - 22.11%.
-With this release, we now support a total of 142 neural voices across 60 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit [Language support](language-support.md#text-to-speech) for the full list.
-
-**Get facial pose events to animate characters**
-
-Neural Text-to-Speech now includes the [viseme event](how-to-speech-synthesis-viseme.md). Viseme events allow users to get a sequence of facial poses along with synthesized speech. Visemes can be used to control the movement of 2D and 3D avatar models, matching mouth movements to synthesized speech. Viseme events are only available for `en-US-AriaNeural` voice at this time.
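As a minimal C# sketch of consuming viseme events (assuming the Speech SDK's `SpeechSynthesizer`; the subscription key and region are placeholders):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Sketch: subscribe to viseme events during synthesis. Call from an async method.
var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
config.SpeechSynthesisVoiceName = "en-US-AriaNeural"; // The only voice with viseme events at this time.

using (var synthesizer = new SpeechSynthesizer(config))
{
    synthesizer.VisemeReceived += (s, e) =>
    {
        // AudioOffset is in 100-nanosecond ticks; divide by 10,000 for milliseconds.
        Console.WriteLine($"Viseme {e.VisemeId} at {e.AudioOffset / 10000} ms.");
    };

    await synthesizer.SpeakTextAsync("Hello, world.");
}
```

Each event carries a viseme ID and its audio offset, which you can map to mouth shapes for a 2D or 3D avatar.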
-
-**Add the bookmark element in Speech Synthesis Markup Language (SSML)**
-
-The [bookmark element](speech-synthesis-markup.md#bookmark-element) allows you to insert custom markers in SSML to get the offset of each marker in the audio stream. It can be used to reference a specific location in the text or tag sequence.
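As a minimal C# sketch (the key, region, and SSML text are placeholders):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Sketch: insert bookmark elements into SSML and observe BookmarkReached events.
// Call from an async method.
var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

string ssml =
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
    "<voice name='en-US-AriaNeural'>" +
    "We are selling <bookmark mark='flower_1'/>roses and <bookmark mark='flower_2'/>daisies." +
    "</voice></speak>";

using (var synthesizer = new SpeechSynthesizer(config))
{
    synthesizer.BookmarkReached += (s, e) =>
    {
        // Text is the bookmark's mark attribute; AudioOffset is in 100-ns ticks.
        Console.WriteLine($"Bookmark '{e.Text}' reached at {e.AudioOffset / 10000} ms.");
    };

    await synthesizer.SpeakSsmlAsync(ssml);
}
```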
-
-## Speech SDK 1.16.0: 2021-March release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-#### New features
-- **C++/C#/Jav).
-- **C++/C#/Java/Objective-C/Python**: Added support for decoding compressed TTS/synthesized audio to the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.propertyid), [Java](/java/api/com.microsoft.cognitiveservices.speech.propertyid), [Objective-C](/objectivec/cognitive-services/speech/spxpropertyid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid).
-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
-- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices. Details for [C++](/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoiceasync), and [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#methods).
-- **C++/C#/Jav).
-- **C++/C#/Jav#bookmark-element).
-- **Java**: Added support for speaker recognition APIs. Details [here](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat).
-- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario. Details for [C++](/cpp/cognitive-services/speech/speaker-speakerrecognizer), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speaker.speakerrecognizer), and [Java](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
-- **C++/C#/Java/Objective-C/Python**: Added support for a separate shared library for audio microphone and speaker control. This allows you to use the SDK in environments that do not have the required audio library dependencies.
-- **Objective-C/Swift**: Added support for a module framework with umbrella header. This allows you to import the Speech SDK as a module in iOS/Mac Objective-C/Swift apps. This addresses [GitHub issue #452](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/452).
-- **Python**: Added support for [Python 3.9](./quickstarts/setup-platform.md?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
-**Known issues**
-- **C++/C#/Java**: `DialogServiceConnector` cannot use a `CustomCommandsConfig` to access a Custom Commands application and will instead encounter a connection error. This can be worked around by manually adding your application ID to the request with `config.SetServiceProperty("X-CommandsAppId", "your-application-id", ServicePropertyChannel.UriQueryParameter)`. The expected behavior of `CustomCommandsConfig` will be restored in the next release.
-#### Improvements
-- As part of our multi-release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
-- Improved accuracy, readability, and see-also sections of our C# reference documentation [here](/dotnet/api/microsoft.cognitiveservices.speech).
-#### Bug fixes
-
-- **JavaScript**: Large WAV file headers are now parsed correctly (increases the header slice to 512 bytes). This addresses [GitHub issue #962](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/962).
-- **JavaScript**: Corrected a microphone timing issue when the mic stream ends before stop recognition, addressing an issue with speech recognition not working in Firefox.
-- **JavaScript**: We now correctly handle the initialization promise when the browser forces the mic off before turnOn completes.
-- **JavaScript**: We replaced the URL dependency with url-parse. This addresses [GitHub issue #264](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/264).
-- **Android**: Fixed callbacks not working when `minifyEnabled` is set to true.
-- **C++/C#/Java/Objective-C/Python**: `TCP_NODELAY` is now correctly set on the underlying socket IO for TTS to reduce latency.
-- **C++/C#/Java/Python/Objective-C/Go**: Fixed an occasional crash when the recognizer was destroyed just after starting a recognition.
-- **C++/C#/Java**: Fixed an occasional crash in the destruction of the speaker recognizer.
-
-#### Samples
-
-- **JavaScript**: [Browser samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser) no longer require a separate JavaScript library file download.
-
-## Speech CLI (also known as SPX): 2021-March release
-
-> [!NOTE]
-> Get started with the Azure Speech service command line interface (CLI) [here](spx-basics.md). The CLI enables you to use the Azure Speech service without writing any code.
-
-#### New features
-
-- Added the `spx intent` command for intent recognition, replacing `spx recognize intent`.
-- Recognize and intent can now use Azure functions to calculate the word error rate using `spx recognize --wer url <URL>`.
-- Recognize can now output results as VTT files using `spx recognize --output vtt file <FILENAME>`.
-- Sensitive key info is now obscured in debug/verbose output.
-- Added URL checking and an error message for the content field in batch transcription create.
-
-**COVID-19 abridged testing**:
-
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2021-February release
-
-**Custom Neural Voice GA**
-
-Custom Neural Voice is GA in February in 13 languages: Chinese (Mandarin, Simplified), English (Australia), English (India), English (United Kingdom), English (United States), French (Canada), French (France), German (Germany), Italian (Italy), Japanese (Japan), Korean (Korea), Portuguese (Brazil), Spanish (Mexico), and Spanish (Spain). Learn more about [what is Custom Neural Voice](custom-neural-voice.md) and [how to use it responsibly](concepts-guidelines-responsible-deployment-synthetic.md).
-Custom Neural Voice feature requires registration and Microsoft may limit access based on Microsoft's eligibility criteria. Learn more about the [limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context).
-
-## Speech SDK 1.15.0: 2021-January release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**Highlights summary**
-- Smaller memory and disk footprint, making the SDK more efficient.
-- Higher-fidelity output formats available for the Custom Neural Voice private preview.
-- The Intent Recognizer can now return more than the top intent, giving you the ability to make a separate assessment about your customer's intent.
-- Your voice assistant or bot is now easier to set up; you can make it stop listening immediately and exercise greater control over how it responds to errors.
-- Improved on-device performance by making compression optional.
-- Use the Speech SDK on Windows ARM/ARM64.
-- Improved low-level debugging.
-- The pronunciation assessment feature is now more widely available.
-- Several bug fixes to address issues YOU, our valued customers, have flagged on GitHub! THANK YOU! Keep the feedback coming!
-
-**Improvements**
-- The Speech SDK is now more efficient and lightweight. We have started a multi-release effort to reduce the Speech SDK's memory usage and disk footprint. As a first step, we made significant file size reductions in shared libraries on most platforms. Compared to the 1.14 release:
- - 64-bit UWP-compatible Windows libraries are about 30% smaller.
- - 32-bit Windows libraries are not yet seeing size improvements.
- - Linux libraries are 20-25% smaller.
- - Android libraries are 3-5% smaller.
-
-**New features**
-- **All**: New 48KHz output formats available for the private preview of Custom Neural Voice through the TTS speech synthesis API: Audio48Khz192KBitRateMonoMp3, audio-48khz-192kbitrate-mono-mp3, Audio48Khz96KBitRateMonoMp3, audio-48khz-96kbitrate-mono-mp3, Raw48Khz16BitMonoPcm, raw-48khz-16bit-mono-pcm, Riff48Khz16BitMonoPcm, riff-48khz-16bit-mono-pcm.
-- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like prebuilt voices, and then provide the deployment ID by setting `EndpointId`. This simplifies setting up custom voices (see the sketch after this list).
-- **C++/C#/Java**: The Intent Recognizer can now return more than the top intent. Details [here](./get-started-intent-recognition.md#add-a-languageunderstandingmodel-and-intents).
-- **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This immediately stops audio capture and gracefully waits for a result, making it perfect for use with "stop now" button-press scenarios.
-- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext) resolution on the bot and report turn execution failures when they happen, for example as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (such as looking up a product), `TurnStatusReceived` lets the client know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
-- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
-- **Java**: [`DialogServiceConnector`](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector) now has a `setSpeechActivityTemplate()` method that was unintentionally excluded from the language previously. This is equivalent to setting the `Conversation_Speech_Activity_Template` property and requests that all future Bot Framework activities originated by the Direct Line Speech service merge the provided content into their JSON payloads.
-- **Java**: Improved low-level debugging. The [`Connection`](/java/api/com.microsoft.cognitiveservices.speech.connection) class now has a `MessageReceived` event, similar to other programming languages (C++, C#). This event provides low-level access to incoming data from the service and can be useful for diagnostics and debugging.
-- **JavaScript**: Easier setup for voice assistants and bots through [`BotFrameworkConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/botframeworkconfig), which now has `fromHost()` and `fromEndpoint()` factory methods that simplify the use of custom service locations versus manually setting properties. We also standardized optional specification of `botId` to use a non-default bot across the configuration factories.
-- **JavaScript**: Improved on-device performance through an added string control property for websocket compression. For performance reasons we disabled websocket compression by default. This can be re-enabled for low-bandwidth scenarios. More details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid). This addresses [GitHub issue #242](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/242).
-- **JavaScript**: Added support for pronunciation assessment to enable evaluation of speech pronunciation. See the quickstart [here](./how-to-pronunciation-assessment.md?pivots=programming-language-javascript).
-
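A minimal C# sketch of the simplified custom voice setup described above; the key, region, deployment ID, and voice name are placeholders:

```csharp
using Microsoft.CognitiveServices.Speech;

// Placeholders: substitute your own subscription key, region, and deployment details.
var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>");

// Previously, custom voice required SpeechConfig.FromEndpoint with a full endpoint URL;
// now the deployment ID can be supplied directly.
config.EndpointId = "<deployment-id>";
config.SpeechSynthesisVoiceName = "<your-custom-voice-name>";

using var synthesizer = new SpeechSynthesizer(config);
var result = await synthesizer.SpeakTextAsync("Hello from a custom neural voice.");
```
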
-**Bug fixes**
-- **All** (except JavaScript): Fixed a regression in version 1.14, in which too much memory was allocated by the recognizer.
-- **C++**: Fixed a garbage collection issue with `DialogServiceConnector`, addressing [GitHub issue #794](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/794).
-- **C#**: Fixed an issue with thread shutdown that caused objects to block for about a second when disposed.
-- **C++/C#/Java**: Fixed an exception preventing an application from setting the speech authorization token or activity template more than once on a `DialogServiceConnector`.
-- **C++/C#/Java**: Fixed a recognizer crash due to a race condition in teardown.
-- **JavaScript**: [`DialogServiceConnector`](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector) did not previously honor the optional `botId` parameter specified in `BotFrameworkConfig`'s factories. This made it necessary to set the `botId` query string parameter manually to use a non-default bot. The bug has been corrected, and `botId` values provided to `BotFrameworkConfig`'s factories are honored and used, including the new `fromHost()` and `fromEndpoint()` additions. This also applies to the `applicationId` parameter for `CustomCommandsConfig`.
-- **JavaScript**: Fixed [GitHub issue #881](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/881), allowing recognizer object re-use.
-- **JavaScript**: Fixed an issue where the SDK was sending `speech.config` multiple times in one TTS session, wasting bandwidth.
-- **JavaScript**: Simplified error handling on microphone authorization, allowing a more descriptive message to bubble up when the user has not allowed microphone input in their browser.
-- **JavaScript**: Fixed [GitHub issue #249](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/249), where type errors in `ConversationTranslator` and `ConversationTranscriber` caused a compilation error for TypeScript users.
-- **Objective-C**: Fixed an issue where the GStreamer build failed for iOS on Xcode 11.4, addressing [GitHub issue #911](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/911).
-- **Python**: Fixed [GitHub issue #870](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/870), removing "DeprecationWarning: the imp module is deprecated in favor of importlib".
-
-**Samples**
-- [From-file sample for JavaScript browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/browser/from-file/index.html) now uses files for speech recognition. This addresses [GitHub issue #884](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/884).
-
-## Speech CLI (also known as SPX): 2021-January release
-
-**New features**
-- The Speech CLI is now available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.CLI/) and can be installed via the .NET CLI as a .NET global tool you can call from the shell/command line.
-- The [Custom Speech DevOps Template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) has been updated to use the Speech CLI for its Custom Speech workflows.
-
-**COVID-19 abridged testing**:
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2020-December release
-
-**New neural voices in GA and preview**
-
-Released 51 new voices for a total of 129 neural voices across 54 languages/locales:
-
-- **46 new voices in GA locales**: Shakir in `ar-EG` Arabic (Egypt), Hamed in `ar-SA` Arabic (Saudi Arabia), Borislav in `bg-BG` Bulgarian (Bulgaria), Joana in `ca-ES` Catalan (Spain), Antonin in `cs-CZ` Czech (Czech Republic), Jeppe in `da-DK` Danish (Denmark), Jonas in `de-AT` German (Austria), Jan in `de-CH` German (Switzerland), Nestoras in `el-GR` Greek (Greece), Liam in `en-CA` English (Canada), Connor in `en-IE` English (Ireland), Madhur in `en-IN` Hindi (India), Mohan in `en-IN` Telugu (India), Prabhat in `en-IN` English (India), Valluvar in `en-IN` Tamil (India), Enric in `es-ES` Catalan (Spain), Kert in `et-EE` Estonian (Estonia), Harri in `fi-FI` Finnish (Finland), Selma in `fi-FI` Finnish (Finland), Fabrice in `fr-CH` French (Switzerland), Colm in `ga-IE` Irish (Ireland), Avri in `he-IL` Hebrew (Israel), Srecko in `hr-HR` Croatian (Croatia), Tamas in `hu-HU` Hungarian (Hungary), Gadis in `id-ID` Indonesian (Indonesia), Leonas in `lt-LT` Lithuanian (Lithuania), Nils in `lv-LV` Latvian (Latvia), Osman in `ms-MY` Malay (Malaysia), Joseph in `mt-MT` Maltese (Malta), Finn in `nb-NO` Norwegian, Bokmål (Norway), Pernille in `nb-NO` Norwegian, Bokmål (Norway), Fenna in `nl-NL` Dutch (Netherlands), Maarten in `nl-NL` Dutch (Netherlands), Agnieszka in `pl-PL` Polish (Poland), Marek in `pl-PL` Polish (Poland), Duarte in `pt-BR` Portuguese (Brazil), Raquel in `pt-PT` Portuguese (Portugal), Emil in `ro-RO` Romanian (Romania), Dmitry in `ru-RU` Russian (Russia), Svetlana in `ru-RU` Russian (Russia), Lukas in `sk-SK` Slovak (Slovakia), Rok in `sl-SI` Slovenian (Slovenia), Mattias in `sv-SE` Swedish (Sweden), Sofie in `sv-SE` Swedish (Sweden), Niwat in `th-TH` Thai (Thailand), Ahmet in `tr-TR` Turkish (Turkey), NamMinh in `vi-VN` Vietnamese (Vietnam), HsiaoChen in `zh-TW` Taiwanese Mandarin (Taiwan), YunJhe in `zh-TW` Taiwanese Mandarin (Taiwan), HiuMaan in `zh-HK` Chinese Cantonese (Hong Kong), WanLung in `zh-HK` Chinese Cantonese (Hong Kong).
-
-- **5 new voices in preview locales**: Kert in `et-EE` Estonian (Estonia), Colm in `ga-IE` Irish (Ireland), Nils in `lv-LV` Latvian (Latvia), Leonas in `lt-LT` Lithuanian (Lithuania), Joseph in `mt-MT` Maltese (Malta).
-
-With this release, we now support a total of 129 neural voices across 54 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit [Language support](language-support.md#text-to-speech) for the full list.
-
-**Updates for Audio Content Creation**
-- Improved voice selection UI with voice categories and detailed voice descriptions.
-- Enabled intonation tuning for all neural voices across different languages.
-- Automated the UI localization based on the language of the browser.
-- Enabled `StyleDegree` controls for all `zh-CN` Neural voices.
-Visit the [Audio Content Creation tool](https://speech.microsoft.com/audiocontentcreation) to check out the new features.
-
-**Updates for zh-CN voices**
-- Updated all `zh-CN` neural voices to support English speaking.
-- Enabled all `zh-CN` neural voices to support intonation adjustment. SSML or the Audio Content Creation tool can be used to adjust for the best intonation.
-- Updated all `zh-CN` multi-style neural voices to support `StyleDegree` control, so emotion intensity (soft or strong) is adjustable (see the sketch after this list).
-- Updated `zh-CN-YunyeNeural` to support multiple styles, which can perform different emotions.
-
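A hedged C# sketch of `StyleDegree` tuning via SSML; the credentials are placeholders, and the style name and degree are illustrative values:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>"); // placeholders
using var synthesizer = new SpeechSynthesizer(config);

// styledegree scales the emotion intensity; values above 1 strengthen the style.
const string ssml = @"
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='zh-CN'>
  <voice name='zh-CN-XiaoxiaoNeural'>
    <mstts:express-as style='cheerful' styledegree='1.5'>你好，很高兴见到你。</mstts:express-as>
  </voice>
</speak>";

var result = await synthesizer.SpeakSsmlAsync(ssml);
```
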
-## Text-to-Speech 2020-November release
-
-**New locales and voices in preview**
-- **Five new voices and languages** are introduced to the Neural TTS portfolio. They are: Grace in Maltese (Malta), Ona in Lithuanian (Lithuania), Anu in Estonian (Estonia), Orla in Irish (Ireland), and Everita in Latvian (Latvia).
-- **Five new `zh-CN` voices with multiple styles and roles support**: Xiaohan, Xiaomo, Xiaorui, Xiaoxuan, and Yunxi.
-
-> These voices are available in public preview in three Azure regions: EastUS, SouthEastAsia and WestEurope.
-
-**Neural TTS Container GA**
-- With the Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements. See [how to install Speech Containers](speech-container-howto.md).
-
-**New features**
-- **Custom Voice**: Enabled users to copy a voice model from one region to another, and added support for endpoint suspension and resuming. Go to the [Custom Voice portal](https://speech.microsoft.com/customvoice).
-- [SSML silence tag](speech-synthesis-markup.md#add-silence) support.
-- General TTS voice quality improvements: Improved word-level pronunciation accuracy in `nb-NO`, reducing pronunciation errors by 53%.
-
-> Read more at [this tech blog](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604).
-
-## Text-to-Speech 2020-October release
-
-**New features**
-- Jenny supports a new `newscast` style. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
-- **Neural voices upgraded to HiFiNet vocoder, with higher audio fidelity and faster synthesis speed.** This benefits customers whose scenario relies on hi-fi audio or long interactions, including video dubbing, audio books, or online education materials. [Read more about the story and hear the voice samples on our tech community blog](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-- **[Custom Voice](https://speech.microsoft.com/customvoice) & [Audio Content Creation Studio](https://speech.microsoft.com/audiocontentcreation) localized to 17 locales.** Users can easily switch the UI to a local language for a more friendly experience.
-- **Audio Content Creation**: Added style degree control for XiaoxiaoNeural; refined the customized break feature to include incremental breaks of 50ms.
-
-**General TTS voice quality improvements**
-- Improved word-level pronunciation accuracy in `pl-PL` (error rate reduction: 51%) and `fi-FI` (error rate reduction: 58%).
-- Improved `ja-JP` single-word reading for the dictionary scenario. Reduced pronunciation errors by 80%.
-- `zh-CN-XiaoxiaoNeural`: Improved the sentiment/CustomerService/Newscast/Cheerful/Angry style voice quality.
-- `zh-CN`: Improved Erhua pronunciation and light tone, and refined space prosody, which greatly improves intelligibility.
-
-## Speech SDK 1.14.0: 2020-October release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**New features**
-- **Linux**: Added support for Debian 10 and Ubuntu 20.04 LTS.
-- **Python/Objective-C**: Added support for the `KeywordRecognizer` API. Documentation is [here](./custom-keyword-basics.md).
-- **C++/Java/C#**: Added support to set any `HttpHeader` key/value via `ServicePropertyChannel::HttpHeader`.
-- **JavaScript**: Added support for the `ConversationTranscriber` API. Read the documentation [here](./how-to-use-conversation-transcription.md?pivots=programming-language-javascript).
-- **C++/C#**: Added a new `AudioDataStream FromWavFileInput` method (to read .WAV files); see [here (C++)](/cpp/cognitive-services/speech/audiodatastream) and [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream).
-- **C++/C#/Java/Python/Objective-C/Swift**: Added a `stopSpeakingAsync()` method to stop text-to-speech synthesis (see the sketch after this list). Read the reference documentation [here (C++)](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace), [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech), [here (Java)](/java/api/com.microsoft.cognitiveservices.speech), [here (Python)](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech), and [here (Objective-C/Swift)](/objectivec/cognitive-services/speech/).
-- **C#, C++, Java**: Added a `FromDialogServiceConnector()` function to the `Connection` class that can be used to monitor connection and disconnection events for `DialogServiceConnector`. Read the reference documentation [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech.connection), [here (C++)](/cpp/cognitive-services/speech/connection), and [here (Java)](/java/api/com.microsoft.cognitiveservices.speech.connection).
-
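A minimal C# sketch of interrupting synthesis with the new stop method, assuming placeholder credentials:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>"); // placeholders
using var synthesizer = new SpeechSynthesizer(config);

// Kick off a long synthesis without awaiting its completion.
var speakTask = synthesizer.SpeakTextAsync("A very long passage that the user may want to interrupt ...");

// ... later, for example when the user presses a stop button:
await synthesizer.StopSpeakingAsync(); // stops playback and discards any pending audio

await speakTask;
```
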
-**Breaking change**
-- **JavaScript**: `PullAudioOutputStream.read()` has a return type change from an internal Promise to a native JavaScript Promise.
-
-**Bug fixes**
-- **All**: Fixed a 1.13 regression in `SetServiceProperty` where values with certain special characters were ignored.
-- **C#**: Fixed Windows console samples on Visual Studio 2019 failing to find native DLLs.
-- **C#**: Fixed a crash with memory management if a stream is used as `KeywordRecognizer` input.
-- **Objective-C/Swift**: Fixed a crash with memory management if a stream is used as recognizer input.
-- **Windows**: Fixed a co-existence issue with BT HFP/A2DP on UWP.
-- **JavaScript**: Fixed mapping of session IDs to improve logging and aid in internal debug/service correlations.
-- **JavaScript**: Added a fix for `DialogServiceConnector` disabling `ListenOnce` calls after the first call is made.
-- **JavaScript**: Fixed an issue where result output would only ever be "simple".
-- **JavaScript**: Fixed a continuous recognition issue in Safari on macOS.
-- **JavaScript**: CPU load mitigation for high request throughput scenarios.
-- **JavaScript**: Allow access to details of the Voice Profile Enrollment result.
-- **JavaScript**: Added a fix for continuous recognition in `IntentRecognizer`.
-- **C++/C#/Java/Python/Swift/Objective-C**: Fixed an incorrect URL for australiaeast and brazilsouth in `IntentRecognizer`.
-- **C++/C#**: Added `VoiceProfileType` as an argument when creating a `VoiceProfile` object.
-- **C++/C#/Java/Python/Swift/Objective-C**: Fixed a potential `SPX_INVALID_ARG` when trying to read `AudioDataStream` from a given position.
-- **iOS**: Fixed a crash with speech recognition on Unity.
-
-**Samples**
-- **Objective-C**: Added a sample for keyword recognition [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/objective-c/ios/speech-samples).
-- **C#/JavaScript**: Added a quickstart for conversation transcription [here (C#)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/dotnet/conversation-transcription) and [here (JavaScript)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/conversation-transcription).
-- **C++/C#/Java/Python/Swift/Objective-C**: Added a sample for pronunciation assessment [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples).
-- **Xamarin**: Updated the quickstart to the latest Visual Studio template [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/xamarin).
-
-**Known Issue**
-- The DigiCert Global Root G2 certificate is not supported by default in HoloLens 2 and Android 4.4 (KitKat) and needs to be added to the system to make the Speech SDK functional. The certificate will be added to HoloLens 2 OS images in the near future. Android 4.4 customers need to add the updated certificate to the system.
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Speech CLI (also known as SPX): 2020-October release
-SPX is the command line interface to use the Azure Speech service without writing code.
-Download the latest version [here](./spx-basics.md). <br>
-
-**New features**
-- `spx csr dataset upload --kind audio|language|acoustic` – create datasets from local data, not just from URLs.
-- `spx csr evaluation create|status|list|update|delete` – compare new models against baseline truth/other models.
-- `spx * list` – supports a non-paged experience (doesn't require --top X --skip X).
-- `spx * --http header A=B` – supports custom headers (added for Office for custom authentication).
-- `spx help` – improved text, and back-tick text is color-coded (blue).
-
-## Text-to-Speech 2020-September release
-
-### New features
-
-* **Neural TTS**
- * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
- * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#prebuilt-neural-voices).
- * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
-
-* **Containers: Neural TTS Container released in public preview with 16 voices available in 14 languages.** Learn more on [how to deploy Speech Containers for Neural TTS](speech-container-howto.md)
-
-Read the [full announcement of the TTS updates for Ignite 2020](https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544)
-
-## Text-to-Speech 2020-August release
-
-### New features
-
-* **Neural TTS: new speaking style for `en-US` Aria voice**. AriaNeural can sound like a newscaster when reading news. The 'newscast-formal' style sounds more serious, while the 'newscast-casual' style is more relaxed and informal. See [how to use the speaking styles in SSML](speech-synthesis-markup.md).
-
-* **Custom Voice: a new feature is released to automatically check training data quality**. When you upload your data, the system examines various aspects of your audio and transcript data and automatically fixes or filters issues to improve the quality of the voice model. This covers the volume of your audio, the noise level, the pronunciation accuracy of speech, the alignment of speech with the normalized text, silence in the audio, and the audio and script format.
-
-* **Audio Content Creation: a set of new features to enable more powerful voice tuning and audio management capabilities**.
-
- * Pronunciation: the pronunciation tuning feature is updated to the latest phoneme set. You can pick the right phoneme element from the library and refine the pronunciation of the words you have selected.
-
- * Download: The audio "Download"/"Export" feature is enhanced to support generating audio by paragraph. You can edit content in the same file/SSML, while generating multiple audio outputs. The file structure of "Download" is refined as well. Now, you can easily get all audio files in one folder.
-
- * Task status: The multi-file export experience is improved. Previously, when you exported multiple files and one of them failed, the entire task failed; now, all other files are exported successfully. The task report is enriched with more detailed and structured information, and you can check the logs for all failed files and sentences in the report.
-
- * SSML documentation: Linked to the SSML documentation to help you check the rules for using all the tuning features.
-
-* **The Voice List API is updated to include a user-friendly display name and the speaking styles supported for neural voices**.
-
-### General TTS voice quality improvements
-
-* Reduced word-level pronunciation error % for `ru-RU` (errors reduced by 56%) and `sv-SE` (errors reduced by 49%)
-
-* Improved the reading of polyphonic words on `en-US` neural voices by 40%. Examples of such words include "read", "live", "content", "record", and "object".
-
-* Improved the naturalness of the question tone in `fr-FR`. MOS (Mean Opinion Score) gain: +0.28
-
-* Updated the vocoders for the following voices, with fidelity improvements and overall performance speed-up by 40%.
-
- | Locale | Voice |
- |||
- | `en-GB` | Mia |
- | `es-MX` | Dalia |
- | `fr-CA` | Sylvie |
- | `fr-FR` | Denise |
- | `ja-JP` | Nanami |
- | `ko-KR` | Sun-Hi |
-
-### Bug fixes
-
-* Fixed a number of bugs with the Audio Content Creation tool
- * Fixed issue with auto refreshing.
- * Fixed issues with voice styles in zh-CN in the South East Asia region.
- * Fixed stability issue, including an export error with the 'break' tag, and errors in punctuation.
-
-## New speech-to-text locales: 2020-August Release
-Speech-to-text released 26 new locales in August: 2 European languages `cs-CZ` and `hu-HU`, 5 English locales and 19 Spanish locales that cover most South American countries. Below is a list of the new locales. See the complete language list [here](./language-support.md).
-
-| Locale | Language |
-||--|
-| `cs-CZ` | Czech (Czech Republic) |
-| `en-HK` | English (Hong Kong) |
-| `en-IE` | English (Ireland) |
-| `en-PH` | English (Philippines) |
-| `en-SG` | English (Singapore) |
-| `en-ZA` | English (South Africa) |
-| `es-AR` | Spanish (Argentina) |
-| `es-BO` | Spanish (Bolivia) |
-| `es-CL` | Spanish (Chile) |
-| `es-CO` | Spanish (Colombia) |
-| `es-CR` | Spanish (Costa Rica) |
-| `es-CU` | Spanish (Cuba) |
-| `es-DO` | Spanish (Dominican Republic) |
-| `es-EC` | Spanish (Ecuador) |
-| `es-GT` | Spanish (Guatemala) |
-| `es-HN` | Spanish (Honduras) |
-| `es-NI` | Spanish (Nicaragua) |
-| `es-PA` | Spanish (Panama) |
-| `es-PE` | Spanish (Peru) |
-| `es-PR` | Spanish (Puerto Rico) |
-| `es-PY` | Spanish (Paraguay) |
-| `es-SV` | Spanish (El Salvador) |
-| `es-US` | Spanish (USA) |
-| `es-UY` | Spanish (Uruguay) |
-| `es-VE` | Spanish (Venezuela) |
-| `hu-HU` | Hungarian (Hungary) |
--
-## Speech SDK 1.13.0: 2020-July release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download and install it from [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**New features**
-
-- **C#**: Added support for asynchronous conversation transcription. See the documentation [here](./how-to-async-conversation-transcription.md).
-- **JavaScript**: Added Speaker Recognition support for both [browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/browser/speaker-recognition) and [node.js](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/speaker-recognition).
-- **JavaScript**: Added support for language identification/language ID. See the documentation [here](./how-to-automatic-language-detection.md?pivots=programming-language-javascript).
-- **Objective-C**: Added support for [multi-device conversation](./multi-device-conversation.md) and [conversation transcription](./conversation-transcription.md).
-- **Python**: Added compressed audio support for Python on Windows and Linux. See the documentation [here](./how-to-use-codec-compressed-audio-input-streams.md).
-
-**Bug fixes**
-
-- **All**: Fixed an issue that caused the `KeywordRecognizer` not to move the streams forward after a recognition.
-- **All**: Fixed an issue that caused the stream obtained from a `KeywordRecognitionResult` to not contain the keyword.
-- **All**: Fixed an issue where `SendMessageAsync` did not actually send the message over the wire after users finished waiting for it.
-- **All**: Fixed a crash in the Speaker Recognition APIs when users called the `VoiceProfileClient::SpeakerRecEnrollProfileAsync` method multiple times without waiting for the calls to finish.
-- **All**: Fixed enabling file logging in the `VoiceProfileClient` and `SpeakerRecognizer` classes.
-- **JavaScript**: Fixed an [issue](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/74) with throttling when the browser is minimized.
-- **JavaScript**: Fixed an [issue](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/78) with a memory leak on streams.
-- **JavaScript**: Added caching for OCSP responses from NodeJS.
-- **Java**: Fixed an issue that was causing BigInteger fields to always return 0.
-- **iOS**: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/702) with publishing Speech SDK-based apps in the iOS App Store.
-
-**Samples**
-
-- **C++**: Added sample code for Speaker Recognition [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/windows/console/samples/speaker_recognition_samples.cpp).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2020-July release
-
-### New features
-
-* **Neural TTS, 15 new neural voices**: The new voices added to the Neural TTS portfolio are Salma in `ar-EG` Arabic (Egypt), Zariyah in `ar-SA` Arabic (Saudi Arabia), Alba in `ca-ES` Catalan (Spain), Christel in `da-DK` Danish (Denmark), Neerja in `en-IN` English (India), Noora in `fi-FI` Finnish (Finland), Swara in `hi-IN` Hindi (India), Colette in `nl-NL` Dutch (Netherlands), Zofia in `pl-PL` Polish (Poland), Fernanda in `pt-PT` Portuguese (Portugal), Dariya in `ru-RU` Russian (Russia), Hillevi in `sv-SE` Swedish (Sweden), Achara in `th-TH` Thai (Thailand), HiuGaai in `zh-HK` Chinese (Cantonese, Traditional), and HsiaoYu in `zh-TW` Chinese (Taiwanese Mandarin). Check all [supported languages](./language-support.md#prebuilt-neural-voices).
-
-* **Custom Voice, streamlined voice testing with the training flow to simplify the user experience**: With the new testing feature, each voice is automatically tested with a predefined test set optimized for each language to cover general and voice assistant scenarios. These test sets are carefully selected and tested to include typical use cases and phonemes in the language. In addition, users can still choose to upload their own test scripts when training a model.
-
-* **Audio Content Creation: a set of new features is released to enable more powerful voice tuning and audio management capabilities**
-
- * `Pitch`, `rate`, and `volume` are enhanced to support tuning with a predefined value, like slow, medium and fast. It's now straightforward for users to pick a 'constant' value for their audio editing.
-
- ![Audio tuning](media/release-notes/audio-tuning.png)
-
- * Users can now review the `Audio history` for their work file. With this feature, users can easily track all the generated audio related to a working file. They can check the history version and compare the quality while tuning at the same time.
-
- ![Audio history](media/release-notes/audio-history.png)
-
- * The `Clear` feature is now more flexible. Users can clear a specific tuning parameter while keeping other parameters available for the selected content.
-
- * A tutorial video was added on the [landing page](https://speech.microsoft.com/audiocontentcreation) to help users quickly get started with TTS voice tuning and audio management.
-
-### General TTS voice quality improvements
-
-* Improved the TTS vocoder for higher fidelity and lower latency.
-
- * Updated Elsa in `it-IT` to a new vocoder, which achieved a +0.464 CMOS (Comparative Mean Opinion Score) gain in voice quality, 40% faster synthesis, and a 30% reduction in first-byte latency.
- * Updated Xiaoxiao in `zh-CN` to the new vocoder, with a +0.148 CMOS gain for the general domain, +0.348 for the newscast style, and +0.195 for the lyrical style.
-
-* Updated `de-DE` and `ja-JP` voice models to make the TTS output more natural.
-
- * Updated Katja in `de-DE` with the latest prosody modeling method; the MOS (Mean Opinion Score) gain is +0.13.
- * Updated Nanami in `ja-JP` with a new pitch accent prosody model; the MOS (Mean Opinion Score) gain is +0.19.
-
-* Improved word-level pronunciation accuracy in five languages.
-
- | Language | Pronunciation error reduction |
- |||
- | `en-GB` | 51% |
- | `ko-KR` | 17% |
- | `pt-BR` | 39% |
- | `pt-PT` | 77% |
- | `id-ID` | 46% |
-
-### Bug fixes
-
-* Currency reading
- * Fixed the issue with currency reading for `es-ES` and `es-MX`
-
- | Language | Input | Readout after improvement |
- ||||
- | `es-MX` | $1.58 | un peso cincuenta y ocho centavos |
 | `es-ES` | $1.58 | un dólar cincuenta y ocho centavos |
-
- * Support for negative currency (like "-325 €") in the following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
-
-
-* Improved address reading in `pt-PT`.
-* Fixed Natasha (`en-AU`) and Libby (`en-GB`) pronunciation issues with the words "for" and "four".
-* Fixed bugs in the Audio Content Creation tool:
- * Fixed an additional, unexpected pause after the second paragraph.
- * Added the 'no break' feature back after a regression bug.
- * Fixed a random refresh issue in Speech Studio.
-
-### Samples/SDK
-
-* JavaScript: Fixed a playback issue in Firefox, and in Safari on macOS and iOS.
-
-## Speech SDK 1.12.1: 2020-June release
-**Speech CLI (also known as SPX)**
-- Added in-CLI help search features:
- - `spx help find --text TEXT`
- - `spx help find --topic NAME`
-- Updated to work with newly deployed v3.0 Batch and Custom Speech APIs:
- - `spx help batch examples`
- - `spx help csr examples`
-
-**New features**
-- **C#, C++**: Speaker Recognition preview: This feature enables speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). Start with an [overview](./speaker-recognition-overview.md), read the [Speaker Recognition basics article](./get-started-speaker-recognition.md), or see the [API reference docs](/rest/api/speakerrecognition/). A minimal sketch follows this list.
-
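A hedged C# sketch of text-independent speaker verification with the preview APIs; the credentials and WAV file names are placeholders, not from the release notes:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Speaker;

var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>"); // placeholders

// Create and enroll a voice profile from a WAV sample of the speaker.
using var client = new VoiceProfileClient(config);
using var profile = await client.CreateProfileAsync(VoiceProfileType.TextIndependentVerification, "en-us");
using var enrollAudio = AudioConfig.FromWavFileInput("<enrollment>.wav"); // placeholder path
await client.EnrollProfileAsync(profile, enrollAudio);

// Verification: is the speaker in the test audio the enrolled person?
var model = SpeakerVerificationModel.FromProfile(profile);
using var testAudio = AudioConfig.FromWavFileInput("<test>.wav"); // placeholder path
using var recognizer = new SpeakerRecognizer(config, testAudio);
var result = await recognizer.RecognizeOnceAsync(model);
Console.WriteLine($"Result: {result.Reason}, score {result.Score}");
```
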
-**Bug fixes**
-- **C#, C++**: Fixed an issue where microphone recording was not working in 1.12 for speaker recognition.
-- **JavaScript**: Fixes for text-to-speech in Firefox, and in Safari on macOS and iOS.
-- Fixed a Windows application verifier access violation crash in conversation transcription when using an eight-channel stream.
-- Fixed a Windows application verifier access violation crash in multi-device conversation translation.
-
-**Samples**
-- **C#**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/dotnet/speaker-recognition) for speaker recognition.
-- **C++**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp/windows/speaker-recognition) for speaker recognition.
-- **Java**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/intent-recognition) for intent recognition on Android.
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
--
-## Speech SDK 1.12.0: 2020-May release
-**Speech CLI (also known as SPX)**
-- **SPX** is a new command-line tool that allows you to perform recognition, synthesis, translation, batch transcription, and custom speech management from the command line. Use it to test the Speech Service, or to script the Speech Service tasks you need to perform. Download the tool and read the documentation [here](./spx-overview.md).
-
-**New features**
-- **Go**: New Go language support for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Set up your dev environment [here](./quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below.
-- **JavaScript**: Added browser support for text-to-speech. See the documentation [here](./get-started-text-to-speech.md?pivots=programming-language-JavaScript).
-- **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on the Windows, Android, Linux, and iOS platforms. Read the documentation [here](./keyword-recognition-overview.md). For sample code, see the Samples section below and the sketch after this list.
-- **Java**: Added multi-device conversation with translation support. See the reference doc [here](/java/api/com.microsoft.cognitiveservices.speech.transcription).
-
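A minimal C# sketch of the new `KeywordRecognizer`, assuming a keyword model file trained in Speech Studio; the file name is a placeholder:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Keyword recognition runs on-device against a trained keyword model (.table file).
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new KeywordRecognizer(audioConfig);

var model = KeywordRecognitionModel.FromFile("<your-keyword>.table"); // placeholder path
var result = await recognizer.RecognizeOnceAsync(model);

if (result.Reason == ResultReason.RecognizedKeyword)
{
    Console.WriteLine($"Keyword detected: {result.Text}");
}
```
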
-**Improvements & Optimizations**
-- **JavaScript**: Optimized the browser microphone implementation, improving speech recognition accuracy.
-- **Java**: Refactored bindings to use a direct JNI implementation without SWIG. This change reduces the bindings size by 10x for all Java packages used for Windows, Android, Linux, and Mac, and eases further development of the Speech SDK Java implementation.
-- **Linux**: Updated the support [documentation](./speech-sdk.md?tabs=linux) with the latest RHEL 7 specific notes.
-- Improved connection logic to attempt connecting multiple times when service and network errors occur.
-- Updated the [portal.azure.com](https://portal.azure.com) Speech Quickstart page to help developers take the next step in the Azure Speech journey.
-
-**Bug fixes**
-- **C#, Java**: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/587) with loading SDK libraries on Linux ARM (both 32-bit and 64-bit).
-- **C#**: Fixed explicit disposal of native handles for the TranslationRecognizer, IntentRecognizer, and Connection objects.
-- **C#**: Fixed audio input lifetime management for the ConversationTranscriber object.
-- Fixed an issue where the `IntentRecognizer` result reason was not set properly when recognizing intents from simple phrases.
-- Fixed an issue where the `SpeechRecognitionEventArgs` result offset was not set correctly.
-- Fixed a race condition where the SDK was trying to send a network message before opening the websocket connection; this was reproducible for `TranslationRecognizer` while adding participants.
-- Fixed memory leaks in the keyword recognizer engine.
-
-**Samples**
-- **Go**: Added quickstarts for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
-- **JavaScript**: Added quickstarts for [Text-to-Speech](./get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](./get-started-intent-recognition.md?pivots=programming-language-javascript).
-- Keyword recognition samples for [C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer) (Android).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. If we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Speech SDK 1.11.0: 2020-March release
-**New features**
-
-- Linux: Added support for Red Hat Enterprise Linux (RHEL)/CentOS 7 x64, with [instructions](./how-to-configure-rhel-centos-7.md) on how to configure the system for the Speech SDK.
-- Linux: Added support for .NET Core C# on Linux ARM32 and ARM64. Read more [here](./speech-sdk.md?tabs=linux).
-- C#, C++: Added `UtteranceId` in `ConversationTranscriptionResult`, a consistent ID across all the intermediate and final speech recognition results. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.transcription.conversationtranscriptionresult), [C++](/cpp/cognitive-services/speech/transcription-conversationtranscriptionresult).
-- Python: Added support for `Language ID`. See speech_sample.py in the [GitHub repo](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/python/console).
-- Windows: Added compressed audio input format support on the Windows platform for all win32 console applications. Details [here](./how-to-use-codec-compressed-audio-input-streams.md).
-- JavaScript: Support speech synthesis (text-to-speech) in NodeJS. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/text-to-speech).
-- JavaScript: Added new APIs to enable inspection of all sent and received messages. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript).
-
-**Bug fixes**
-
-- C#, C++: Fixed an issue so `SendMessageAsync` now sends binary messages as the binary type. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.sendmessageasync#Microsoft_CognitiveServices_Speech_Connection_SendMessageAsync_System_String_System_Byte___System_UInt32_), [C++](/cpp/cognitive-services/speech/connection).
-- C#, C++: Fixed an issue where using the `Connection MessageReceived` event could cause a crash if the `Recognizer` was disposed before the `Connection` object. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.messagereceived), [C++](/cpp/cognitive-services/speech/connection#messagereceived).
-- Android: Decreased the audio buffer size from the microphone from 800ms to 100ms to improve latency.
-- Android: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/563) with the x86 Android emulator in Android Studio.
-- JavaScript: Added support for regions in China with the `fromSubscription` API. Details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#fromsubscription-string--string-).
-- JavaScript: Added more error information for connection failures from NodeJS.
-
-**Samples**
-
-- Unity: Fixed the public intent recognition sample, where LUIS JSON import was failing. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/369).
-- Python: Added a sample for `Language ID`. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual device verification testing as we normally do. For example, we couldn't test microphone input and speaker output on Linux, iOS, and macOS. We haven't made any changes we think could have broken anything on these platforms, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Thank you for your continued support. As always, please post questions or feedback on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen) or [Stack Overflow](https://stackoverflow.microsoft.com/questions/tagged/731).<br>
-Stay healthy!
-
-## Speech SDK 1.10.0: 2020-February release
-
-**New features**
-
- > [!NOTE]
- > Customers must configure OpenSSL according to [these instructions](./how-to-configure-openssl-linux.md).
-
-**Bug fixes**
--
- **Samples**
--
-**Other changes**
--
-## Speech SDK 1.9.0: 2020-January release
-
-**New Features**
-
-- Multi-device conversation: Connect multiple devices to the same speech or text-based conversation, and optionally translate messages sent between them. Learn more in [this article](multi-device-conversation.md).
-- Keyword recognition support added for the Android .aar package, plus support for the x86 and x64 flavors.
-- Objective-C: `SendMessage` and `SetMessageProperty` methods added to the `Connection` object. See the documentation [here](/objectivec/cognitive-services/speech/spxconnection).
-- The TTS C++ API now supports `std::wstring` as synthesis text input, removing the need to convert a wstring to a string before passing it to the SDK. See details [here](/cpp/cognitive-services/speech/speechsynthesizer#speaktextasync).
-- C#: [Language ID](./how-to-automatic-language-detection.md?pivots=programming-language-csharp) and [source language config](./how-to-specify-source-language.md?pivots=programming-language-csharp) are now available.
-- JavaScript: Added a feature to the `Connection` object to pass through custom messages from the Speech Service as the callback `receivedServiceMessage`.
-- JavaScript: Added support for the `FromHost API` to ease use with on-prem containers and sovereign clouds. See the documentation [here](speech-container-howto.md).
-- JavaScript: We now honor `NODE_TLS_REJECT_UNAUTHORIZED` thanks to a contribution from [orgads](https://github.com/orgads). See details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/pull/75).
-
-**Breaking changes**
-
-- `OpenSSL` has been updated to version 1.1.1b and is statically linked to the Speech SDK core library for Linux. This may cause a break if your inbox `OpenSSL` has not been installed to the `/usr/lib/ssl` directory in the system. Check [our documentation](how-to-configure-openssl-linux.md) under the Speech SDK docs to work around the issue.
-- We have changed the data type returned for C# `WordLevelTimingResult.Offset` from `int` to `long` to allow access to `WordLevelTimingResults` when speech data is longer than 2 minutes.
-- `PushAudioInputStream` and `PullAudioInputStream` now send wav header information to the Speech Service based on `AudioStreamFormat`, optionally specified when they were created. Customers must now use the [supported audio input format](how-to-use-audio-input-streams.md). Any other formats will get suboptimal recognition results or may cause other issues.
-
-**Bug fixes**
-
-- See the `OpenSSL` update under Breaking changes above. We fixed both an intermittent crash and a performance issue (lock contention under high load) in Linux and Java.
-- Java: Made improvements to object closure in high-concurrency scenarios.
-- Restructured our NuGet package. We removed the three copies of `Microsoft.CognitiveServices.Speech.core.dll` and `Microsoft.CognitiveServices.Speech.extension.kws.dll` under lib folders, making the NuGet package smaller and faster to download, and we added headers needed to compile some C++ native apps.
-- Fixed quickstart samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp). These were exiting without displaying the "microphone not found" exception on Linux, macOS, and Windows.
-- Fixed an SDK crash with long speech recognition results on certain code paths like [this sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/uwp/speechtotext-uwp).
-- Fixed an SDK deployment error in the Azure Web App environment to address [this customer issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/396).
-- Fixed a TTS error while using multiple `<voice>` tags or the `<audio>` tag to address [this customer issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/433).
-- Fixed a TTS 401 error when the SDK is recovered from being suspended.
-- JavaScript: Fixed a circular import of audio data, thanks to a contribution from [euirim](https://github.com/euirim).
-- JavaScript: Added support for setting service properties, as added in 1.7.
-- JavaScript: Fixed an issue where a connection error could result in continuous, unsuccessful websocket reconnect attempts.
-
-**Samples**
-
-- Added a keyword recognition sample for Android [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/android/sdkdemo).
-- Added a TTS sample for the server scenario [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs).
-- Added multi-device conversation quickstarts for C# and C++ [here](quickstarts/multi-device-conversation.md).
-
-**Other changes**
-
-- Optimized the SDK core library size on Android.
-- The SDK in 1.9.0 and onward supports both `int` and `string` types in the voice signature version field for Conversation Transcriber.
-
-## Speech SDK 1.8.0: 2019-November release
-
-**New Features**
-
-- Added a `FromHost()` API to ease use with on-prem containers and sovereign clouds.
-- Added Source Language Identification for speech recognition (in Java and C++).
-- Added a `SourceLanguageConfig` object for speech recognition, used to specify expected source languages (in Java and C++).
-- Added `KeywordRecognizer` support on Windows (UWP), Android, and iOS through the NuGet and Unity packages.
-- Added a Remote Conversation Java API to do Conversation Transcription in asynchronous batches.
-
-**Breaking changes**
-
-- Conversation Transcriber functionality moved under the namespace `Microsoft.CognitiveServices.Speech.Transcription`.
-- Parts of the Conversation Transcriber methods moved to the new `Conversation` class.
-- Dropped support for 32-bit (ARMv7 and x86) iOS.
-
-**Bug fixes**
-
-- Fixed a crash when a local `KeywordRecognizer` is used without a valid Speech service subscription key.
-
-**Samples**
-
-- Xamarin sample for `KeywordRecognizer`.
-- Unity sample for `KeywordRecognizer`.
-- C++ and Java samples for Automatic Source Language Identification.
-
-## Speech SDK 1.7.0: 2019-September release
-
-**New Features**
-
-- Added beta support for Xamarin on Universal Windows Platform (UWP), Android, and iOS.
-- Added iOS support for Unity.
-- Added `Compressed` input support for ALaw, Mulaw, and FLAC on Android, iOS, and Linux.
-- Added `SendMessageAsync` in the `Connection` class for sending a message to the service.
-- Added `SetMessageProperty` in the `Connection` class for setting a property of a message.
-- TTS added bindings for Java (JRE and Android), Python, Swift, and Objective-C.
-- TTS added playback support for macOS, iOS, and Android.
-- Added "word boundary" information for TTS.
-
-**Bug fixes**
-
-- Fixed an IL2CPP build issue on Unity 2019 for Android.
-- Fixed an issue with malformed headers in WAV file input being processed incorrectly.
-- Fixed an issue with UUIDs not being unique in some connection properties.
-- Fixed a few warnings about nullability specifiers in the Swift bindings (might require small code changes).
-- Fixed a bug that caused websocket connections to be closed ungracefully under network load.
-- Fixed an issue on Android that sometimes resulted in duplicate impression IDs used by `DialogServiceConnector`.
-- Improved the stability of connections across multi-turn interactions and the reporting of failures (via `Canceled` events) when they occur with `DialogServiceConnector`.
-- `DialogServiceConnector` session starts now properly provide events, including when calling `ListenOnceAsync()` during an active `StartKeywordRecognitionAsync()`.
-- Addressed a crash associated with `DialogServiceConnector` activities being received.
-
-**Samples**
-
-- Quickstart for Xamarin.
-- Updated the C++ quickstart with Linux ARM64 information.
-- Updated the Unity quickstart with iOS information.
-
-## Speech SDK 1.6.0: 2019-June release
-
-**Samples**
-
-- Quickstart samples for Text To Speech on UWP and Unity.
-- Quickstart sample for Swift on iOS.
-- Unity samples for speech and intent recognition and translation.
-- Updated quickstart samples for `DialogServiceConnector`.
-
-**Improvements / Changes**
--- Dialog namespace:
- - `SpeechBotConnector` has been renamed to `DialogServiceConnector`
- - `BotConfig` has been renamed to `DialogServiceConfig`
- - `BotConfig::FromChannelSecret()` has been remapped to `DialogServiceConfig::FromBotSecret()`
- - All existing Direct Line Speech clients continue to be supported after the rename
-- Updated the TTS REST adapter to support proxies and persistent connections.
-- Improved the error message shown when an invalid region is passed.
-- Swift/Objective-C:
- - Improved error reporting: Methods that can result in an error are now present in two versions: One that exposes an `NSError` object for error handling, and one that raises an exception. The former are exposed to Swift. This change requires adaptations to existing Swift code.
- - Improved event handling
-
-**Bug fixes**
-
-- Fix for TTS: the `SpeakTextAsync` future returned without waiting until the audio had completed rendering.
-- Fix for marshaling strings in C# to enable full language support.
-- Fix for a .NET Core app problem loading the core library with the net461 target framework in samples.
-- Fix for occasional issues deploying native libraries to the output folder in samples.
-- Fix for closing web sockets reliably.
-- Fix for a possible crash while opening a connection under heavy load on Linux.
-- Fix for missing metadata in the framework bundle for macOS.
-- Fix for problems with `pip install --user` on Windows.
-
-## Speech SDK 1.5.1
-
This is a bug fix release affecting only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-**Bug fixes**
-
-- Fix `FromSubscription` when used with Conversation Transcription.
-- Fix a bug in keyword spotting for voice assistants.
-
-## Speech SDK 1.5.0: 2019-May release
-
-**New features**
-
-- Keyword spotting (KWS) is now available for Windows and Linux. KWS functionality might work with any microphone type; official KWS support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.
-- Phrase hint functionality is available through the SDK (a sketch follows this list). For more information, see [here](./get-started-speech-to-text.md).
-- Conversation transcription functionality is available through the SDK. See [here](./conversation-transcription.md).
-- Added support for voice assistants using the Direct Line Speech channel.
-
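In current SDK versions, phrase hints surface through the `PhraseListGrammar` type; here's a minimal C# sketch, where the key, region, and phrases are placeholders:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(config);

// Bias recognition toward domain-specific terms the service might otherwise miss.
var phraseList = PhraseListGrammar.FromRecognizer(recognizer);
phraseList.AddPhrase("Contoso");
phraseList.AddPhrase("Jessie");

var result = await recognizer.RecognizeOnceAsync();
```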
-**Samples**
-
-- Added samples for new features or new services supported by the SDK.
-
-**Improvements / Changes**
-
-- Added various recognizer properties to adjust service behavior or service results (like masking profanity and others).
-- You can now configure the recognizer through the standard configuration properties, even if you created the recognizer `FromEndpoint`.
-- Objective-C: An `OutputFormat` property was added to `SPXSpeechConfiguration`.
-- The SDK now supports Debian 9 as a Linux distribution.
-
-**Bug fixes**
-
-- Fixed a problem where the speaker resource was destructed too early in Text-to-Speech.
-
-## Speech SDK 1.4.2
-
This is a bug fix release affecting only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-## Speech SDK 1.4.1
-
-This is a JavaScript-only release. No features have been added. The following fixes were made:
-
-- Prevent webpack from loading https-proxy-agent.
-
-## Speech SDK 1.4.0: 2019-April release
-
-**New features**
-
-- The SDK now supports the Text-to-Speech service as a beta version. It is supported on Windows and Linux desktop from C++ and C#. For more information, check the [Text-to-Speech overview](text-to-speech.md#get-started).
-- The SDK now supports MP3 and Opus/OGG audio files as stream input files. This feature is available only on Linux from C++ and C# and is currently in beta (more details [here](how-to-use-codec-compressed-audio-input-streams.md)); a sketch follows this list.
-- The Speech SDK for Java, .NET Core, C++, and Objective-C has gained macOS support. The Objective-C support for macOS is currently in beta.
-- iOS: The Speech SDK for iOS (Objective-C) is now also published as a CocoaPod.
-- JavaScript: Support for a non-default microphone as an input device.
-- JavaScript: Proxy support for Node.js.
-
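The compressed-input support works by wrapping a stream in a compressed `AudioStreamFormat`. A hedged C# sketch follows; the file name, key, and region are placeholders, and decoding compressed input on Linux requires GStreamer to be installed:

```csharp
using System.IO;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Declare the container format so the SDK knows the pushed bytes are MP3, not PCM.
var format = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MP3);
using var pushStream = AudioInputStream.CreatePushStream(format);
using var audioConfig = AudioConfig.FromStreamInput(pushStream);

var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

// Feed the compressed bytes, then close the stream to signal end of audio.
pushStream.Write(File.ReadAllBytes("speech.mp3"));
pushStream.Close();
```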
-**Samples**
-
-- Samples for using the Speech SDK with C++ and with Objective-C on macOS have been added.
-- Samples demonstrating the usage of the Text-to-Speech service have been added.
-
-**Improvements / Changes**
-
-- Python: Additional properties of recognition results are now exposed via the `properties` property.
-- For additional development and debug support, you can redirect SDK logging and diagnostics information into a log file (more details [here](how-to-use-logging.md)); see the sketch after this list.
-- JavaScript: Improved audio processing performance.
-
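File logging is switched on through a configuration property; a minimal C# sketch, where the key, region, and log file path are placeholders:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

// Redirect SDK diagnostics into a local file for debugging.
config.SetProperty(PropertyId.Speech_LogFilename, "speech-sdk.log");
```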
-**Bug fixes**
-
-- Mac/iOS: Fixed a bug that led to a long wait when a connection to the Speech service could not be established.
-- Python: Improved error handling for arguments in Python callbacks.
-- JavaScript: Fixed wrong state reporting for speech ended on `RequestSession`.
-
-## Speech SDK 1.3.1: 2019-February refresh
-
This is a bug fix release affecting only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-**Bug fix**
-
-- Fixed a memory leak when using microphone input. Stream-based or file input is not affected.
-
-## Speech SDK 1.3.0: 2019-February release
-
-**New Features**
-
-- The Speech SDK supports selection of the input microphone through the `AudioConfig` class. This allows you to stream audio data to the Speech service from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript. A sketch follows this list.
-- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://aka.ms/csspeech/samples). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications) and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).
-- The file `Microsoft.CognitiveServices.Speech.csharp.bindings.dll` (shipped in previous releases) isn't needed anymore. The functionality is now integrated into the core SDK.
-
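Microphone selection hangs off `AudioConfig`; a hedged C# sketch, where the device ID is a hypothetical placeholder (real IDs are platform-specific endpoint strings):

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

// Pass a platform-specific device ID to capture from a non-default microphone.
using var audioConfig = AudioConfig.FromMicrophoneInput("<device-id>");
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
```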
-**Samples**
-
-The following new content is available in our [sample repository](https://aka.ms/csspeech/samples):
-
-- Additional samples for `AudioConfig.FromMicrophoneInput`.
-- Additional Python samples for intent recognition and translation.
-- Additional samples for using the `Connection` object in iOS.
-- Additional Java samples for translation with audio output.
-- New sample for use of the [Batch Transcription REST API](batch-transcription.md).
-
-**Improvements / Changes**
--- Python
- - Improved parameter verification and error messages in `SpeechConfig`.
- - Add support for the `Connection` object.
- - Support for 32-bit Python (x86) on Windows.
- - The Speech SDK for Python is out of beta.
-- iOS
- - The SDK is now built against the iOS SDK version 12.1.
- - The SDK now supports iOS versions 9.2 and later.
- - Improve reference documentation and fix several property names.
-- JavaScript
- - Add support for the `Connection` object.
- - Add type definition files for bundled JavaScript
- - Initial support and implementation for phrase hints.
- - Return properties collection with service JSON for recognition
-- Windows DLLs now contain a version resource.
-- If you create a recognizer `FromEndpoint`, you can add parameters directly to the endpoint URL. Using `FromEndpoint`, you can't configure the recognizer through the standard configuration properties.
-
-**Bug fixes**
-
-- Empty proxy username and proxy password were not handled correctly. With this release, if you set the proxy username and proxy password to an empty string, they will not be submitted when connecting to the proxy.
-- Session IDs created by the SDK were not always truly random for some languages/environments. Random generator initialization was added to fix this issue.
-- Improved handling of authorization tokens. If you want to use an authorization token, specify it in the `SpeechConfig` and leave the subscription key empty, then create the recognizer as usual (see the sketch after this list).
-- In some cases, the `Connection` object wasn't released correctly. This issue has been fixed.
-- The JavaScript sample was fixed to support audio output for translation synthesis also on Safari.
-
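The authorization-token flow described above looks roughly like this in C#; the token and region values are placeholders obtained from your own token service:

```csharp
using Microsoft.CognitiveServices.Speech;

// Create the config from a token instead of a subscription key.
var config = SpeechConfig.FromAuthorizationToken("<authorization-token>", "<your-region>");
using var recognizer = new SpeechRecognizer(config);

// Tokens expire; refresh by assigning a new one to the recognizer.
recognizer.AuthorizationToken = "<refreshed-token>";
```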
-## Speech SDK 1.2.1
-
-This is a JavaScript-only release. No features have been added. The following fixes were made:
-
-- Fire end of stream at turn.end, not at speech.end.
-- Fixed a bug in the audio pump that did not schedule the next send if the current send failed.
-- Fixed continuous recognition with an auth token.
-- Bug fix for different recognizers/endpoints.
-- Documentation improvements.
-
-## Speech SDK 1.2.0: 2018-December release
-
-**New Features**
--- Python
 - The beta version of Python support (3.5 and above) is available with this release. For more information, see [here](quickstart-python.md).
-- JavaScript
- - The Speech SDK for JavaScript has been open-sourced. The source code is available on [GitHub](https://github.com/Microsoft/cognitive-services-speech-sdk-js).
- - We now support Node.js, more info can be found [here](./get-started-speech-to-text.md).
 - The length restriction for audio sessions has been removed; reconnection happens automatically under the covers.
-- `Connection` object
- - From the `Recognizer`, you can access a `Connection` object. This object allows you to explicitly initiate the service connection and subscribe to connect and disconnect events.
 (This feature is not yet available from JavaScript and Python. See the sketch after this list.)
-- Support for Ubuntu 18.04.
-- Android
- - Enabled ProGuard support during APK generation.
-
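A minimal C# sketch of the `Connection` object described above; the key and region are placeholders:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(config);

// Observe connect/disconnect events and pre-establish the connection explicitly.
var connection = Connection.FromRecognizer(recognizer);
connection.Connected += (s, e) => Console.WriteLine("Connected to the Speech service.");
connection.Disconnected += (s, e) => Console.WriteLine("Disconnected from the Speech service.");
connection.Open(forContinuousRecognition: false);
```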
-**Improvements**
-
-- Improvements in internal thread usage, reducing the number of threads, locks, and mutexes.
-- Improved error reporting/information. In several cases, error messages had not been propagated all the way out.
-- Updated development dependencies in JavaScript to use up-to-date modules.
-
-**Bug fixes**
-
-- Fixed memory leaks due to a type mismatch in `RecognizeAsync`.
-- In some cases, exceptions were being leaked.
-- Fixed a memory leak in translation event arguments.
-- Fixed a locking issue on reconnect in long-running sessions.
-- Fixed an issue that could lead to a missing final result for failed translations.
-- C#: If an `async` operation wasn't awaited in the main thread, it was possible the recognizer could be disposed before the async task completed.
-- Java: Fixed a problem resulting in a crash of the Java VM.
-- Objective-C: Fixed enum mapping; `RecognizedIntent` was returned instead of `RecognizingIntent`.
-- JavaScript: Set the default output format to 'simple' in `SpeechConfig`.
-- JavaScript: Removed inconsistency between properties on the config object in JavaScript and other languages.
-
-**Samples**
-
-- Updated and fixed several samples (for example, output voices for translation).
-- Added Node.js samples in the [sample repository](https://aka.ms/csspeech/samples).
-
-## Speech SDK 1.1.0
-
-**New Features**
-
-- Support for Android x86/x64.
-- Proxy support: In the `SpeechConfig` object, you can now call a function to set the proxy information (hostname, port, username, and password). This feature is not yet available on iOS. A sketch follows this list.
-- Improved error codes and messages. If a recognition returned an error, it already set `Reason` (in the canceled event) or `CancellationDetails` (in the recognition result) to `Error`. The canceled event now contains two additional members, `ErrorCode` and `ErrorDetails`. If the server returned additional error information with the reported error, it is now available in the new members.
-
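Here's a minimal C# sketch of the proxy setting; the host, port, and credentials are placeholders:

```csharp
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

// Route SDK traffic through an HTTP proxy; an overload without credentials also exists.
config.SetProxy("proxy.contoso.com", 8080, "<proxy-user>", "<proxy-password>");
```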
-**Improvements**
-
-- Added additional verification in the recognizer configuration, and improved error messages.
-- Improved handling of long periods of silence in the middle of an audio file.
-- NuGet package: For .NET Framework projects, building with the AnyCPU configuration is prevented.
-
-**Bug fixes**
-
-- Fixed several exceptions found in recognizers. In addition, exceptions are caught and converted into the `Canceled` event.
-- Fixed a memory leak in property management.
-- Fixed a bug in which an audio input file could crash the recognizer.
-- Fixed a bug where events could be received after a session stop event.
-- Fixed some race conditions in threading.
-- Fixed an iOS compatibility issue that could result in a crash.
-- Stability improvements for Android microphone support.
-- Fixed a bug where a recognizer in JavaScript would ignore the recognition language.
-- Fixed a bug preventing setting the `EndpointId` (in some cases) in JavaScript.
-- Changed the parameter order in `AddIntent` in JavaScript, and added a missing `AddIntent` JavaScript signature.
-
-**Samples**
-
-- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://aka.ms/csspeech/samples).
-
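The push-stream pattern those samples demonstrate looks roughly like this in C#; the key, region, and audio source are placeholders, and the pushed bytes should be raw PCM (16 kHz, 16-bit, mono by default) unless a format is supplied:

```csharp
using System.IO;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

// Push audio into the SDK as it arrives, rather than reading from a file or microphone.
using var pushStream = AudioInputStream.CreatePushStream();
using var audioConfig = AudioConfig.FromStreamInput(pushStream);
using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

pushStream.Write(File.ReadAllBytes("audio.raw")); // call Write repeatedly for live audio
pushStream.Close();                               // signal end of stream
```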
-## Speech SDK 1.0.1
-
-Reliability improvements and bug fixes:
-
-- Fixed a potential fatal error due to a race condition in disposing the recognizer.
-- Fixed a potential fatal error when unset properties occur.
-- Added additional error and parameter checking.
-- Objective-C: Fixed a possible fatal error caused by name overriding in NSString.
-- Objective-C: Adjusted the visibility of the API.
-- JavaScript: Fixed issues regarding events and their payloads.
-- Documentation improvements.
-
-In our [sample repository](https://aka.ms/csspeech/samples), a new sample for JavaScript was added.
-
-## Cognitive Services Speech SDK 1.0.0: 2018-September release
-
-**New features**
-
-- Support for Objective-C on iOS. Check out our [Objective-C quickstart for iOS](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/objectivec/ios/from-microphone).
-- Support for JavaScript in browser. Check out our [JavaScript quickstart](./get-started-speech-to-text.md).
-
-**Breaking changes**
--- With this release, a number of breaking changes are introduced.
- Check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details.
-
-## Cognitive Services Speech SDK 0.6.0: 2018-August release
-
-**New features**
--- UWP apps built with the Speech SDK now can pass the Windows App Certification Kit (WACK).
 Check out the [UWP quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp).
-- Support for .NET Standard 2.0 on Linux (Ubuntu 16.04 x64).
-- Experimental: Support for Java 8 on Windows (64-bit) and Linux (Ubuntu 16.04 x64).
- Check out the [Java Runtime Environment quickstart](./get-started-speech-to-text.md?pivots=programming-language-java&tabs=jre).
-
-**Functional change**
-
-- Expose additional error detail information on connection errors.
-
-**Breaking changes**
-
-- On Java (Android), the `SpeechFactory.configureNativePlatformBindingWithDefaultCertificate` function no longer requires a path parameter. Now the path is automatically detected on all supported platforms.
-- The get-accessor of the property `EndpointUrl` in Java and C# was removed.
-
-**Bug fixes**
-
-- In Java, the audio synthesis result on the translation recognizer is now implemented.
-- Fixed a bug that could cause inactive threads and an increased number of open and unused sockets.
-- Fixed a problem where a long-running recognition could terminate in the middle of the transmission.
-- Fixed a race condition in recognizer shutdown.
-
-## Cognitive Services Speech SDK 0.5.0: 2018-July release
-
-**New features**
-
-- Support for the Android platform (API 23: Android 6.0 Marshmallow or higher). Check out the [Android quickstart](./get-started-speech-to-text.md?pivots=programming-language-java&tabs=android).
-- Support for .NET Standard 2.0 on Windows. Check out the [.NET Core quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore).
-- Experimental: Support for UWP on Windows (version 1709 or later).
- - Check out the [UWP quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp).
- - Note that UWP apps built with the Speech SDK do not yet pass the Windows App Certification Kit (WACK).
-- Support for long-running recognition with automatic reconnection.
-
-**Functional changes**
-
-- `StartContinuousRecognitionAsync()` supports long-running recognition.
-- The recognition result contains more fields: the offset from the audio beginning and the duration (both in ticks) of the recognized text, plus additional values that represent recognition status, for example, `InitialSilenceTimeout` and `InitialBabbleTimeout`. A sketch of reading these fields follows this list.
-- Support for `AuthorizationToken` for creating factory instances.
-
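In current C# SDKs, the offset and duration surface on the result object; a hedged sketch, with key and region as placeholders:

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
using var recognizer = new SpeechRecognizer(config);

var result = await recognizer.RecognizeOnceAsync();

// Offset is reported in ticks (100-nanosecond units); Duration is a TimeSpan.
Console.WriteLine($"Text: {result.Text}");
Console.WriteLine($"Offset: {result.OffsetInTicks} ticks, Duration: {result.Duration}");
```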
-**Breaking changes**
-
-- Recognition events: The `NoMatch` event type was merged into the `Error` event.
-- `SpeechOutputFormat` in C# was renamed to `OutputFormat` to stay aligned with C++.
-- The return type of some methods of the `AudioInputStream` interface changed slightly:
- - In Java, the `read` method now returns `long` instead of `int`.
- - In C#, the `Read` method now returns `uint` instead of `int`.
- - In C++, the `Read` and `GetFormat` methods now return `size_t` instead of `int`.
-- C++: Instances of audio input streams now can be passed only as a `shared_ptr`.
-
-**Bug fixes**
-
-- Fixed incorrect return values in the result when `RecognizeAsync()` times out.
-- The dependency on Media Foundation libraries on Windows was removed. The SDK now uses Core Audio APIs.
-- Documentation fix: Added a [regions](regions.md) page to describe the supported regions.
-
-**Known issue**
-
-- The Speech SDK for Android doesn't report speech synthesis results for translation. This issue will be fixed in the next release.
-
-## Cognitive Services Speech SDK 0.4.0: 2018-June release
-
-**Functional changes**
-- AudioInputStream
-
- A recognizer now can consume a stream as the audio source. For more information, see the related [how-to guide](how-to-use-audio-input-streams.md).
-- Detailed output format
-
- When you create a `SpeechRecognizer`, you can request `Detailed` or `Simple` output format. The `DetailedSpeechRecognitionResult` contains a confidence score, recognized text, raw lexical form, normalized form, and normalized form with masked profanity.
-
-**Breaking change**
-
-- Renamed `SpeechRecognitionResult.RecognizedText` to `SpeechRecognitionResult.Text` in C#.
-
-**Bug fixes**
-
-- Fixed a possible callback issue in the USP layer during shutdown.
-- If a recognizer consumed an audio input file, it was holding on to the file handle longer than necessary.
-- Removed several deadlocks between the message pump and the recognizer.
-- Fire a `NoMatch` result when the response from the service times out.
-- The Media Foundation libraries on Windows are delay loaded. This library is required for microphone input only.
-- The upload speed for audio data is limited to about twice the original audio speed.
-- On Windows, C# .NET assemblies are now strong named.
-- Documentation fix: `Region` is required information to create a recognizer.
-
-More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://aka.ms/csspeech/samples).
-
-## Cognitive Services Speech SDK 0.2.12733: 2018-May release
-
-This release is the first public preview release of the Cognitive Services Speech SDK.
+***
container-registry Container Registry Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-intro.md
Azure Container Registry is a managed, private Docker registry service based on
Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.
-For more about Docker and registry concepts, see the [Docker overview](https://docs.docker.com/engine/docker-overview/) and [About registries, repositories, and images](container-registry-concepts.md).
+To learn more about Docker and registry concepts, see the [Docker overview](https://docs.docker.com/engine/docker-overview/) and [About registries, repositories, and images](container-registry-concepts.md).
## Use cases
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
As your schema evolves, and new properties are added over time, the analytical s
The following constraints are applicable on the operational data in Azure Cosmos DB when you enable analytical store to automatically infer and represent the schema correctly:
-* You can have a maximum of 1000 properties at any nesting level in the schema and a maximum nesting depth of 127.
+* You can have a maximum of 1000 properties across all nested levels in the document schema and a maximum nesting depth of 127.
* Only the first 1000 properties are represented in the analytical store.
* Only the first 127 nested levels are represented in the analytical store.
* The first level of a JSON document is its `/` root level.
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 06/15/2021 Last updated : 01/10/2022
try
    // Log the diagnostics and add any additional info necessary to correlate to other logs
    Console.Write(response.Diagnostics.ToString());
}
-}catch(CosmosException cosmosException){
+}
+catch (CosmosException cosmosException)
+{
    // Log the full exception including the stack trace
    Console.Write(cosmosException.ToString());
    // The Diagnostics can be logged separately if required.
    Console.Write(cosmosException.Diagnostics.ToString());
}
+// When using Stream APIs
ResponseMessage response = await this.Container.CreateItemStreamAsync(stream, partitionKey);
-if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || IsFailureStatusCode(response.StatusCode))
+if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || !response.IsSuccessStatusCode)
{
    // Log the diagnostics and add any additional info necessary to correlate to other logs
    Console.Write(response.Diagnostics.ToString());
The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to parse the diagnostics JSON programmatically.
### <a name="cpu-history"></a>CPU history

High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might do multiple connections for a single query.
+# [3.21 or greater SDK](#tab/cpu-new)
+
+The timeouts will contain *Diagnostics*, which contain:
+
+```json
+"systemHistory": [
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+...
+]
+```
+
+* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source(s) of the thread starvation (potentially locked threads), or scale the machine(s) to a larger resource size.
+
+# [Older SDK](#tab/cpu-old)
If the error contains `TransportException` information, it might also contain `CPU History`:

```
-CPU history:
-(2020-08-28T00:40:09.1769900Z 0.114),
-(2020-08-28T00:40:19.1763818Z 1.732),
-(2020-08-28T00:40:29.1759235Z 0.000),
-(2020-08-28T00:40:39.1763208Z 0.063),
-(2020-08-28T00:40:49.1767057Z 0.648),
-(2020-08-28T00:40:59.1689401Z 0.137),
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
CPU count: 8)
```
-* If the CPU utilization is over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it or scale the machine to a larger resource size.
-* If the CPU measurements are not happening every 10 seconds, the gaps or measurement times indicate larger times in between measurements. In such a case, the cause is thread starvation. The solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, gaps or measurement times indicate longer intervals between measurements), the cause is thread starvation. In this case, the solution is to investigate the source(s) of the thread starvation (potentially locked threads), or scale the machine(s) to a larger resource size.
+#### Solution:
+The client application that uses the SDK should be scaled up or out.
databox-gateway Data Box Gateway 2105 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-gateway/data-box-gateway-2105-release-notes.md
+
+ Title: Azure Data Box Gateway 2105 release notes| Microsoft Docs
+description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2105 release.
++
+
+++ Last updated : 01/07/2022+++
+# Azure Data Box Gateway 2105 release notes
+
+The following release notes identify the critical open issues and the resolved issues for the 2105 release of Azure Data Box Gateway.
+
+The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes.
+
+This release corresponds to the software version:
+
+- **Data Box Gateway 2105 (1.6.1588.3220-42623-42265845)** - 4618211
+
+Update 2105 can be applied to all prior releases of Data Box Gateway.
+
+## What's new
+
+This release contains the following bug fix:
+
+- **Buffer overrun results in abrupt reboot of gateway** - This release fixes a bug that can cause a buffer overrun resulting in access of invalid memory, leading to an abrupt, unexpected reboot of the gateway device. The error can occur when a client accesses the last several bytes of a file whose data needs to be read back by the appliance from Azure, and the file size isn't a multiple of 4096 bytes.
+
+This release also contains the following updates:
+
+- All cumulative updates and .NET framework updates through April 2021.
+
+## Known issues in this release
+
+No new issues are documented for this release. All release-noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release).
+
+## Next steps
+
+- [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md)
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-overview.md
Last updated 11/09/2021
Defender for Cloud generates alerts for resources deployed on your Azure, on-premises, and hybrid cloud environments.
-Security alerts are triggered by advanced detections and are available only with enhanced security features enabled. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). A free 30-day trial is available. For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Security alerts are triggered by advanced detections and are available only with enhanced security features enabled. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). A free 30-day trial is available. For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## What are security alerts and security incidents?
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High | | **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Defense Evasion, Execution | High |
-| **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
+| **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High |
| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium |
| **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining.<br>(Applies to: App Service on Linux) | Defense Evasion, Command and Control, Exploitation | Medium |
-| **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
+| **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
| **Potential dangling DNS record for an App Service resource detected**<br>(AppServices_PotentialDanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This might leave you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity. In this case, a text record with the Domain Verification ID was found. Such text records prevent subdomain takeover but we still recommend removing the dangling domain. If you leave the DNS record pointing at the subdomain, you're at risk if anyone in your organization deletes the TXT file or record in the future.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
| **Potential reverse shell detected**<br>(AppServices_ReverseShell) | Analysis of host data detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns.<br>(Applies to: App Service on Linux) | Exfiltration, Exploitation | Medium |
| **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service.<br>(Applies to: App Service on Windows) | Execution | Medium |
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-app-service-introduction.md
To protect your Azure App Service plan with Microsoft Defender for App Service,
| Aspect | Details |
|--|:--|
| Release state: | General availability (GA) |
-| Pricing: | Microsoft Defender for App Service is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)<br>Billing is according to total compute instances in all plans |
+| Pricing: | Microsoft Defender for App Service is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)<br>Billing is according to total compute instances in all plans |
| Supported App Service plans: | [All App Service plans](https://azure.microsoft.com/pricing/details/app-service/plans/) are supported except [Azure Functions on the consumption plan](../azure-functions/functions-scale.md). |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
| | |
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-cloud-introduction.md
Use the advanced protection tiles in the [workload protections dashboard](worklo
- To get started with Defender for Cloud, you need a subscription to Microsoft Azure. If you don't have a subscription, [sign up for a free trial](https://azure.microsoft.com/free/).
-- Defender for Cloud's free plan is enabled on all your current Azure subscriptions when you visit the Defender for Cloud pages in the Azure portal for the first time, or if enabled programmatically via the REST API. To take advantage of advanced security management and threat detection capabilities, you must enable the enhanced security features. These features are free for the first 30 days. [Learn more about the pricing](https://azure.microsoft.com/pricing/details/security-center/).
+- Defender for Cloud's free plan is enabled on all your current Azure subscriptions when you visit the Defender for Cloud pages in the Azure portal for the first time, or if enabled programmatically via the REST API. To take advantage of advanced security management and threat detection capabilities, you must enable the enhanced security features. These features are free for the first 30 days. [Learn more about the pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
- If you're ready to enable enhanced security features now, [Quickstart: Enable enhanced security features](enable-enhanced-security.md) walks you through the steps.
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-container-registries-cicd.md
You'll get traceability information such as the GitHub workflow and the GitHub
|Aspect|Details|
|-|:-|
|Release state:| **This CI/CD integration is in preview.**<br>We recommend that you experiment with it on non-production workflows only.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
|||
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
|Aspect|Details|
|-|:-|
|Release state:|Generally available (GA)|
-|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Supported registries and images:|Linux images in ACR registries accessible from the public internet with shell access<br>[ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)|
|Unsupported registries and images:|Windows images<br>'Private' registries (unless access is granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services))<br>Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images, or "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br>Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)|
|Required roles and permissions:|**Security reader** and [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md)|
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-enable.md
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md).

> [!NOTE]
> Defender for Containers' support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is a preview feature.
>
> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
::: zone pivot="defender-for-container-aks"
-
-## Enable the plan
-
-1. From Defender for Cloud's menu, open the [Environment settings page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/EnvironmentSettings) and select the relevant subscription.
-
-1. In the [Defender plans page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/pricingTier), enable **Defender for Containers**
-
- > [!TIP]
- > If the subscription already has Defender for Kubernetes and/or Defender for container registries enabled, an update notice is shown. Otherwise, the only option will be **Defender for Containers**.
- >
- > :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-
1. By default, the plan is configured to automatically defend any supported Kubernetes cluster that is attached to this subscription. To optionally modify the configuration, select **Configure** in the configuration column.
-
- :::image type="content" source="media/defender-for-containers/defender-for-containers-provisioning-configuration.gif" alt-text="Viewing the configuration for Defender for Containers.":::
-
- You can also modify this configuration from the [Auto provisioning page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/dataCollection) on the **Microsoft Defender for Containers components (preview)** row:
-
- :::image type="content" source="media/defender-for-containers/auto-provisioning-defender-for-containers.png" alt-text="Screenshot of the auto provisioning options for Microsoft Defender for Containers." lightbox="./media/defender-for-containers/auto-provisioning-defender-for-containers.png":::
-
-1. If you disable the auto provisioning of any component, you can easily deploy the component to one or more clusters using the appropriate recommendation:
-
- - Policy Add-on for Kubernetes - [Azure Kubernetes Service clusters should have the Azure Policy Add-on for Kubernetes installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08e628db-e2ed-4793-bc91-d13e684401c3)
- - Azure Kubernetes Service profile - [Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a83a6e-c417-42ec-b567-1e6fcb3d09a9)
- - Azure Arc-enabled Kubernetes extension - [Azure Arc-enabled Kubernetes clusters should have the Defender extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ef9848c-c2c8-4ff3-8b9c-4c8eb8ddfce6)
--
-## Deploy the Defender profile
-
-You can enable the containers plan and deploy all of the relevant components from the Azure portal, the REST API, or with a Resource Manager template. For detailed steps, select the relevant tab.
-
-The Defender security profile is a preview feature. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
-### [**Azure portal**](#tab/aks-deploy-portal)
-
-### Use the fix button from the Defender for Cloud recommendation
-
-A streamlined, frictionless, process lets you use the Azure portal pages to enable the Defender for Cloud plan and setup auto provisioning of all the necessary components for defending your Kubernetes clusters at scale.
-
-A dedicated Defender for Cloud recommendation provides:
-
-- **Visibility** about which of your clusters has the Defender profile deployed
-- **Fix** button to deploy it to those clusters without the profile
-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable enhanced security** security control.
-
-1. Use the filter to find the recommendation named **Azure Kubernetes Service clusters should have Defender profile enabled**.
-
- > [!TIP]
- > Notice the **Fix** icon in the actions column
-
-1. Select the clusters to see the details of the healthy and unhealthy resources - clusters with and without the profile.
-
-1. From the unhealthy resources list, select a cluster and select **Remediate** to open the pane with the remediation confirmation.
-
-1. Select **Fix *[x]* resources**.
--
-### [**REST API**](#tab/aks-deploy-rest)
-
-### Use the REST API to deploy the Defender profile
-
-To install the 'SecurityProfile' on an existing cluster with the REST API, run the following PUT command:
-
-```rest
-PUT https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
-```
-
-Request URI: `https://management.azure.com/subscriptions/{{SubscriptionId}}/resourcegroups/{{ResourceGroup}}/providers/Microsoft.ContainerService/managedClusters/{{ClusterName}}?api-version={{ApiVersion}}`
-
-Request query parameters:
-
-| Name | Description | Mandatory |
-|-||--|
-| SubscriptionId | Cluster's subscription ID | Yes |
-| ResourceGroup | Cluster's resource group | Yes |
-| ClusterName | Cluster's name | Yes |
-| ApiVersion | API version, must be >= 2021-07-01 | Yes |
-| | | |
-
-Request Body:
-
-```rest
-{
- "location": "{{Location}}",
- "properties": {
- "securityProfile": {
- "azureDefender": {
- "enabled": true,
- "logAnalyticsWorkspaceResourceId": "{{LAWorkspaceResourceId}}"
- }
- }
- }
-}
-```
-
-Request body parameters:
-
-| Name | Description | Mandatory |
-|--||--|
-| location | Cluster's location | Yes |
-| properties.securityProfile.azureDefender.enabled | Determines whether to enable or disable Microsoft Defender for Containers on the cluster | Yes |
-| properties.securityProfile.azureDefender.logAnalyticsWorkspaceResourceId | Log Analytics workspace Azure resource ID | Yes |
-| | | |
--
-### [**Resource Manager**](#tab/aks-deploy-arm)
-
-### Use Azure Resource Manager to deploy the Defender profile
-
-To use Azure Resource Manager to deploy the Defender profile, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-To install the 'SecurityProfile' on an existing cluster with Resource Manager, use the following template:
-
-```
-{
- "type": "Microsoft.ContainerService/managedClusters",
- "apiVersion": "2021-07-01",
- "name": "string",
- "location": "string",
- "properties": {
- …
- "securityProfile": {
- "azureDefender": {
- "enabled": true,
- "logAnalyticsWorkspaceResourceId": ΓÇ£logAnalyticsWorkspaceResourceId "
- }
- },
- }
-}
-```
::: zone-end
-
::: zone pivot="defender-for-container-arc"
-
-## Enable the plan
-
-1. From Defender for Cloud's menu, open the [Environment settings page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/EnvironmentSettings) and select the relevant subscription.
-
-1. In the [Defender plans page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/pricingTier), enable **Defender for Containers**
-
- > [!TIP]
- > If the subscription already has Defender for Kubernetes and/or Defender for container registries enabled, an update notice is shown. Otherwise, the only option will be **Defender for Containers**.
- >
- > :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-
1. By default, the plan is configured to automatically defend any supported Kubernetes cluster that is attached to this subscription. To optionally modify the configuration, select **Configure** in the configuration column.
-
- :::image type="content" source="media/defender-for-containers/defender-for-containers-provisioning-configuration.gif" alt-text="Viewing the configuration for Defender for Containers.":::
-
- You can also modify this configuration from the [Auto provisioning page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/dataCollection) on the **Microsoft Defender for Containers components (preview)** row:
-
- :::image type="content" source="media/defender-for-containers/auto-provisioning-defender-for-containers.png" alt-text="Screenshot of the auto provisioning options for Microsoft Defender for Containers." lightbox="./media/defender-for-containers/auto-provisioning-defender-for-containers.png":::
-
-1. If you disable the auto provisioning of any component, you can easily deploy the component to one or more clusters using the appropriate recommendation:
-
- - Policy Add-on for Kubernetes - [Azure Kubernetes Service clusters should have the Azure Policy Add-on for Kubernetes installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08e628db-e2ed-4793-bc91-d13e684401c3)
- - Azure Kubernetes Service profile - [Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a83a6e-c417-42ec-b567-1e6fcb3d09a9)
- - Azure Arc-enabled Kubernetes extension - [Azure Arc-enabled Kubernetes clusters should have the Defender extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ef9848c-c2c8-4ff3-8b9c-4c8eb8ddfce6)
-
-## Prerequisites
-
-Before deploying the extension, ensure you:
-- [Connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)
-- Complete the [pre-requisites listed under the generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md#prerequisites).
-- Configure **port 443** on the following endpoints for outbound access:
- - For clusters on Azure Government cloud:
- - *.ods.opinsights.azure.us
- - *.oms.opinsights.azure.us
- - :::no-loc text="login.microsoftonline.us":::
- - For clusters on other Azure cloud deployments:
- - *.ods.opinsights.azure.com
- - *.oms.opinsights.azure.com
- - :::no-loc text="login.microsoftonline.com":::
-
-## Deploy the Defender extension
-
-You can deploy the Defender extension using a range of methods. For detailed steps, select the relevant tab.
-
-### [**Azure portal**](#tab/k8s-deploy-asc)
-
-### Use the fix button from the Defender for Cloud recommendation
-
-A dedicated Defender for Cloud recommendation provides:
-
-- **Visibility** about which of your clusters has the Defender for Kubernetes extension deployed
-- **Fix** button to deploy it to those clusters without the extension
-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable enhanced security** security control.
-
-1. Use the filter to find the recommendation named **Azure Arc-enabled Kubernetes clusters should have Defender for Cloud's extension installed**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
-
- > [!TIP]
- > Notice the **Fix** icon in the actions column
-
-1. Select the extension to see the details of the healthy and unhealthy resources - clusters with and without the extension.
-
-1. From the unhealthy resources list, select a cluster and select **Remediate** to open the pane with the remediation options.
-
-1. Select the relevant Log Analytics workspace and select **Remediate x resource**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/security-center-deploy-extension.gif" alt-text="Deploy Defender extension for Azure Arc with Defender for Cloud's 'fix' option.":::
--
-### [**Azure CLI**](#tab/k8s-deploy-cli)
-
-### Use Azure CLI to deploy the Defender extension
-
-1. Log in to Azure:
-
- ```azurecli
- az login
- az account set --subscription <your-subscription-id>
- ```
-
- > [!IMPORTANT]
- > Ensure that you use the same subscription ID for ``<your-subscription-id>`` as the one that was used when connecting your cluster to Azure Arc.
-
-1. Run the following command to deploy the extension on top of your Azure Arc-enabled Kubernetes cluster:
-
- ```azurecli
- az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group> --extension-type microsoft.azuredefender.kubernetes
- ```
-
- A description of all the supported configuration settings on the Defender extension type is given below:
-
- | Property | Description |
- |-|-|
- | logAnalyticsWorkspaceResourceID | **Optional**. Full resource ID of your own Log Analytics workspace.<br>When not provided, the default workspace of the region will be used.<br><br>To get the full resource ID, run the following command to display the list of workspaces in your subscriptions in the default JSON format:<br>```az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json```<br><br>The Log Analytics workspace resource ID has the following syntax:<br>/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group}/providers/Microsoft.OperationalInsights/workspaces/{your-workspace-name}. <br>Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces) |
- | auditLogPath |**Optional**. The full path to the audit log files.<br>When not provided, the default path ``/var/log/kube-apiserver/audit.log`` will be used.<br>For AKS Engine, the standard path is ``/var/log/kubeaudit/audit.log`` |
-
- The below command shows an example usage of all optional fields:
-
- ```azurecli
- az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --extension-type microsoft.azuredefender.kubernetes --configuration-settings logAnalyticsWorkspaceResourceID=<log-analytics-workspace-resource-id> auditLogPath=<your-auditlog-path>
- ```
-
-### [**Resource Manager**](#tab/k8s-deploy-resource-manager)
-
-### Use Azure Resource Manager to deploy the Defender extension
-
-To use Azure Resource Manager to deploy the Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-You can use the **azure-defender-extension-arm-template.json** Resource Manager template from Defender for Cloud's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-### [**REST API**](#tab/k8s-deploy-api)
-
-### Use REST API to deploy the Defender extension
-
-To use the REST API to deploy the Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> The simplest way to use the API to deploy the Defender extension is with the supplied **Postman Collection JSON** example from Defender for Cloud's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
-- To modify the Postman Collection JSON, or to manually deploy the extension with the REST API, run the following PUT command:
- ```rest
- PUT https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
- ```
-
- Where:
-
- | Name | In | Required | Type | Description |
- |--||-|--|-|
- | Subscription ID | Path | True | String | Your Azure Arc-enabled Kubernetes resource's subscription ID |
- |Resource Group | Path | True | String | Name of the resource group containing your Azure Arc-enabled Kubernetes resource |
- | Cluster Name | Path | True | String | Name of your Azure Arc-enabled Kubernetes resource |
- | | | | | |
--
- For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
-
- `az account get-access-token --subscription <your-subscription-id>`
- Use the following structure for the body of your message:
- ```json
- {
- "properties": {
- "extensionType": "microsoft.azuredefender.kubernetes",
- "con figurationSettings":ΓÇ»{
- "logAnalytics.workspaceId":"YOUR-WORKSPACE-ID"
- // , "auditLogPath":"PATH/TO/AUDITLOG"
- },
- "configurationProtectedSettings": {
- "logAnalytics.key":"YOUR-WORKSPACE-KEY"
- }
- }
- }
- ```
-
- The following table describes the properties:
-
- | Property | Description |
- | -- | -- |
- | logAnalytics.workspaceId | Workspace ID of the Log Analytics resource |
- | logAnalytics.key | Key of the Log Analytics resource |
- | auditLogPath | **Optional**. The full path to the audit log files. The default value is ``/var/log/kube-apiserver/audit.log`` |
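
    To illustrate how these pieces fit together, here's a hedged sketch of the full call using `curl`. The placeholder values and the `body.json` file name are assumptions; the file holds the request body shown above:

    ```bash
    # Acquire a bearer token for the Azure management endpoint.
    TOKEN=$(az account get-access-token --query accessToken -o tsv)

    # Deploy the Defender extension with the JSON body saved in body.json.
    curl -X PUT \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d @body.json \
      "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview"
    ```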
---
-## Verify the deployment
-
-To verify that your cluster has the Defender extension installed on it, follow the steps in one of the tabs below:
-
-### [**Azure portal - Defender for Cloud**](#tab/k8s-verify-asc)
-
-### Use Defender for Cloud recommendation to verify the status of your extension
-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable Microsoft Defender for Cloud** security control.
-
-1. Select the recommendation named **Azure Arc-enabled Kubernetes clusters should have Microsoft Defender for Cloud's extension installed**.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
-
-1. Check that the cluster on which you deployed the extension is listed as **Healthy**.
--
-### [**Azure portal - Azure Arc**](#tab/k8s-verify-arc)
-
-### Use the Azure Arc pages to verify the status of your extension
-
-1. From the Azure portal, open **Azure Arc**.
-1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
-1. Open the extensions page. The extensions on the cluster are listed. To confirm whether the Defender extension was installed correctly, check the **Install status** column.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png" alt-text="Azure Arc page for checking the status of all installed extensions on a Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png":::
-
-1. For more details, select the extension.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-details-page.png" alt-text="Full details of an Azure Arc extension on a Kubernetes cluster.":::
--
-### [**Azure CLI**](#tab/k8s-verify-cli)
-
-### Use Azure CLI to verify that the extension is deployed
-
-1. Run the following command on Azure CLI:
-
- ```azurecli
- az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
- ```
-
-1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
-
- > [!NOTE]
- > It might show "installState": "Pending" for the first few minutes.
-
-1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
-
- ```console
- kubectl get pods -n azuredefender
- ```
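
    If you'd rather script this check than read through the full JSON, here's a small sketch using the CLI's built-in JMESPath support (same placeholder names as above, and assuming the response surfaces `installState` at the top level as described in the previous steps):

    ```bash
    # Print only the install state; expect "Installed" once provisioning completes.
    az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes --query installState -o tsv
    ```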
-
-### [**REST API**](#tab/k8s-verify-api)
-
-### Use the REST API to verify that the extension is deployed
-
-To confirm a successful deployment, or to validate the status of your extension at any time:
-
-1. Run the following GET command:
-
- ```rest
- GET https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
- ```
-
-1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
-
- > [!TIP]
- > It might show "installState": "Pending" for the first few minutes.
-
-1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
-
- ```console
- kubectl get pods -n azuredefender
- ```
---

::: zone-end

::: zone pivot="defender-for-container-eks"
-## Protect Amazon Elastic Kubernetes Service clusters
-
-> [!IMPORTANT]
-> If you haven't already connected an AWS account, do so now using the instructions in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) and skip to step 3 below.
-
-To protect your EKS clusters, enable the Containers plan on the relevant account connector:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the AWS connector.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
-
-1. Set the toggle for the **Containers** plan to **On**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/enable-containers-plan-on-aws-connector.png" alt-text="Screenshot of enabling Defender for Containers for an AWS connector.":::
-
-1. Optionally, to change the retention period for your audit logs, select **Configure**, enter the required timeframe, and select **Save**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png" alt-text="Screenshot of adjusting the retention period for EKS control pane logs." lightbox="./media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png":::
-
-1. Continue through the remaining pages of the connector wizard.
-
-1. Azure Arc-enabled Kubernetes and the Defender extension should be installed and running on your EKS clusters. A dedicated Defender for Cloud recommendation deploys the extension (and Arc if necessary):
-
- 1. From Defender for Cloud's **Recommendations** page, search for **EKS clusters should have Azure Defender's extension for Azure Arc installed**.
- 1. Select an unhealthy cluster.
-
- > [!IMPORTANT]
- > You must select the clusters one at a time.
- >
- > Don't select the clusters by their hyperlinked names: select anywhere else in the relevant row.
-
- 1. Select **Fix**.
- 1. Defender for Cloud generates a script in the language of your choice: select Bash (for Linux) or PowerShell (for Windows).
- 1. Select **Download remediation logic**.
- 1. Run the generated script on your cluster.
-
- :::image type="content" source="./media/defender-for-kubernetes-intro/generate-script-defender-extension-kubernetes.gif" alt-text="Video of how to use the Defender for Cloud recommendation to generate a script for your EKS clusters that enables the Azure Arc extension. ":::
-
-### View recommendations and alerts for your EKS clusters
-
-> [!TIP]
-> You can simulate container alerts by following the instructions in [this blog post](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-demonstrate-the-new-containers-features-in-azure-security/ba-p/1011270).
-
-To view the alerts and recommendations for your EKS clusters, use the filters on the alerts, recommendations, and inventory pages to filter by resource type **AWS EKS cluster**.
::: zone-end

## Simulate security alerts from Microsoft Defender for Containers

A full list of supported alerts is available in the [reference table of all Defender for Cloud security alerts](alerts-reference.md#alerts-k8scluster).
:::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Microsoft Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png":::
-## Remove the Defender extension
-
-You can remove the extension using Azure portal, Azure CLI, or REST API as explained in the tabs below.
-
-### [**Azure portal - Arc**](#tab/k8s-remove-arc)
-
-### Use Azure portal to remove the extension
-
-1. From the Azure portal, open Azure Arc.
-1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
-1. Open the extensions page. The extensions on the cluster are listed.
-1. Select the cluster and select **Uninstall**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png" alt-text="Removing an extension from your Arc-enabled Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png":::
-
-### [**Azure CLI**](#tab/k8s-remove-cli)
-
-### Use Azure CLI to remove the Defender extension
-
-1. Remove the Microsoft Defender for Kubernetes Arc extension with the following commands:
-
- ```azurecli
- az login
- az account set --subscription <subscription-id>
- az k8s-extension delete --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes --yes
- ```
-
- Removing the extension may take a few minutes. We recommend you wait before you try to verify that it was successful.
-
-1. To verify that the extension was successfully removed, run the following commands:
-
- ```azurecli
- az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
- ```
-
- The extension resource should be deleted from Azure Resource Manager without delay. After that, validate that there are no pods called "azuredefender-XXXXX" on the cluster by running the following command with the `kubeconfig` file pointed to your cluster:
-
- ```console
- kubectl get pods -n azuredefender
- ```
-
- It might take a few minutes for the pods to be deleted.
-
-### [**REST API**](#tab/k8s-remove-api)
-
-### Use REST API to remove the Defender extension
-
-To remove the extension using the REST API, run the following DELETE command:
-
-```rest
-DELETE https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
-```
-
-| Name | In | Required | Type | Description |
-|--||-|--|-|
-| Subscription ID | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's subscription ID |
-| Resource Group | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's resource group |
-| Cluster Name | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's name |
-||||||
-
-For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
-
-```azurecli
-az account get-access-token --subscription <your-subscription-id>
-```
-
-The request may take several minutes to complete.
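
For reference, here's a hedged sketch of the same DELETE call issued with `curl`; the placeholder values follow the table above:

```bash
# Acquire a bearer token for the Azure management endpoint.
TOKEN=$(az account get-access-token --query accessToken -o tsv)

# Remove the Defender extension from the Arc-enabled cluster.
curl -X DELETE \
  -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview"
```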
---

::: zone-end

::: zone pivot="defender-for-container-aks"
-## Remove the Defender profile
-
-You can remove the profile using the REST API or a Resource Manager template as explained in the tabs below.
-
-### [**REST API**](#tab/aks-removeprofile-api)
-
-### Use REST API to remove the Defender profile from AKS
-
-To remove the profile using the REST API, run the following PUT command:
-
-```rest
-PUT https://management.azure.com/subscriptions/{{SubscriptionId}}/resourcegroups/{{ResourceGroup}}/providers/Microsoft.ContainerService/managedClusters/{{ClusterName}}?api-version={{ApiVersion}}
-```
-
-| Name | Description | Mandatory |
-|-||--|
-| SubscriptionId | Cluster's subscription ID | Yes |
-| ResourceGroup | Cluster's resource group | Yes |
-| ClusterName | Cluster's name | Yes |
-| ApiVersion | API version, must be >= 2021-07-01 | Yes |
-| | | |
-
-Request body:
-
-```json
-{
- "location": "{{Location}}",
- "properties": {
- "securityProfile": {
- "azureDefender": {
- "enabled": false
- }
- }
- }
-}
-```
-
-Request body parameters:
-
-| Name | Description | Mandatory |
-|--||--|
-| location | Cluster's location | Yes |
-| properties.securityProfile.azureDefender.enabled | Determines whether to enable or disable Microsoft Defender for Containers on the cluster | Yes |
-| | | |
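
One way to issue this call without hand-crafting the authorization header is `az rest`, which acquires the bearer token for you. The following is a sketch under the assumption that the request body above is saved as `disable-defender.json` (a hypothetical file name):

```bash
# Send the PUT request that disables the Defender profile on the AKS cluster.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>?api-version=2021-07-01" \
  --body @disable-defender.json
```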
--
-### [**Resource Manager**](#tab/aks-removeprofile-resource-manager)
-
-### Use Azure Resource Manager to remove the Defender profile from AKS
-
-To use Azure Resource Manager to remove the Defender profile, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-The relevant template and parameters to remove the Defender profile from AKS are:
-
-```
-{
- "type": "Microsoft.ContainerService/managedClusters",
- "apiVersion": "2021-07-01",
- "name": "string",
- "location": "string",
- "properties": {
- …
- "securityProfile": {
- "azureDefender": {
- "enabled": false
- }
- },
- }
-}
-```
---
-## Remove the Defender extension
-
-If you've tried the Defender extension and decided not to use it, or you're troubleshooting a problem and need to uninstall then reinstall, follow the procedure described in [Remove the add-on](../governance/policy/concepts/policy-for-kubernetes.md#remove-the-add-on).
::: zone-end
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-introduction.md
On this page, you'll learn how you can use Defender for Containers to improv
| Aspect | Details |
|--|:--|
| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
-| Pricing: | **Microsoft Defender for Containers** is free for the month of December 2021. After that, it will be billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/) (which will be updated at the end of December 2021) |
-| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)<br> • Nodes with taints applied |
-| Kubernetes distributions: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
+| Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
| | |
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-databases-introduction.md
Defender for Cloud detects anomalous activities indicating unusual and potential
| Aspect | Details |
|--|:--|
| Release state: | General availability (GA) |
-| Pricing: | **Microsoft Defender for open-source relational databases** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/) |
+| Pricing: | **Microsoft Defender for open-source relational databases** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
| Protected versions of PostgreSQL: | Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL pricing tiers](../postgresql/concepts-pricing-tiers.md). |
| Protected versions of MySQL: | Single Server - General Purpose and Memory Optimized. Learn more in [MySQL pricing tiers](../mysql/concepts-pricing-tiers.md). |
| Protected versions of MariaDB: | General Purpose and Memory Optimized. Learn more in [MariaDB pricing tiers](../mariadb/concepts-pricing-tiers.md). |
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-dns-introduction.md
From within Azure DNS, Defender for DNS monitors the queries from these resource
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government|
|||
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-key-vault-introduction.md
Enable **Microsoft Defender for Key Vault** for Azure-native, advanced threat pr
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Key Vault** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Key Vault** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
|||
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)<br>Protections for EKS clusters are preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).<br>**Containers plan** for EKS clusters in connected AWS accounts is free while it's in preview.|
+|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).<br>**Containers plan** for EKS clusters in connected AWS accounts is free while it's in preview.|
|Required roles and permissions:|**Security admin** can dismiss alerts.<br>**Security reader** can view findings.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview)|
|||
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
Microsoft Defender for Resource Manager automatically monitors the resource mana
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Resource Manager** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Resource Manager** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-sql-introduction.md
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
|Aspect|Details|
|-|:-|
|Release state:|**Microsoft Defender for Azure SQL database servers** - Generally available (GA)<br>**Microsoft Defender for SQL servers on machines** - Generally available (GA) |
-|Pricing:|The two plans that form **Microsoft Defender for SQL** are billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|The two plans that form **Microsoft Defender for SQL** are billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|[SQL on Azure virtual machines](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)<br>[SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>On-premises SQL servers on Windows machines without Azure Arc<br>Azure SQL [single databases](../azure-sql/database/single-database-overview.md) and [elastic pools](../azure-sql/database/elastic-pool-overview.md)<br>[Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md)<br>[Azure Synapse Analytics (formerly SQL DW) dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
|||
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-sql-usage.md
You'll see alerts when there are suspicious database activities, potential vulne
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|SQL Server (versions currently [supported by Microsoft](/mem/configmgr/core/plan-design/configs/support-for-sql-server-versions))|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-storage-introduction.md
Security alerts are triggered when anomalous activities occur. These alerts appe
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/)<br>[Azure Files](../storage/files/storage-files-introduction.md)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enable-data-collection.md
Title: Auto-deploy agents for Microsoft Defender for Cloud | Microsoft Docs
description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud
Previously updated : 12/26/2021 Last updated : 01/10/2022

# Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud
Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to ensure your resources have the necessary agents and extensions used by Defender for Cloud.
+> [!NOTE]
+> When you enable auto provisioning of any of the supported extensions, you'll potentially impact *existing* and *future* machines. But when you **disable** auto provisioning for an extension, you'll only affect the *future* machines: nothing is uninstalled by disabling auto provisioning.
+
## Prerequisites

To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).

## Availability
-| Aspect | Details |
-||:--|
-| Release state: | **Feature**: Auto provisioning is generally available (GA)<br>**Agent and extensions**: Log Analytics agent for Azure VMs is GA, Microsoft Dependency agent is in preview, Policy Add-on for Kubernetes is GA, Guest Configuration agent is preview |
-| Pricing: | Free |
-| Required roles and permissions: | [Owner](../role-based-access-control/built-in-roles.md#owner) to enable auto provisioning of the Log Analytics agent|
-| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines<br>:::image type="icon" source="./media/icons/no-icon.png"::: Kubernetes nodes<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual Machine Scale Sets |
-| Clouds: | **Feature**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet<br>**Agent and extensions**:<br>Log Analytics agent for Azure VMs is available on all clouds, Policy Add-on for Kubernetes is available on all clouds, Guest Configuration agent is only available on commercial clouds |
-| | |
+### [**Auto provisioning**](#tab/autoprovision-feature)
+
+This table shows the availability details for the auto provisioning **feature** itself.
+
+| Aspect | Details |
+||:|
+| Release state: | Auto provisioning is generally available (GA) |
+| Pricing: | Auto provisioning is free to use |
+| Required roles and permissions: | Depends on the specific extension - see relevant tab |
+| Supported destinations: | Depends on the specific extension - see relevant tab |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
++
+### [**Log Analytics agent**](#tab/autoprovision-loganalytic)
+
+| Aspect | Azure virtual machines | Azure Arc-enabled machines |
+||:|:--|
+| Release state: | Generally available (GA) | Preview |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Vulnerability assessment**](#tab/autoprovision-va)
+
+| Aspect | Details |
+||:--|
+| Release state: | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Defender for Endpoint**](#tab/autoprovision-defendpoint)
+
+| Aspect | Linux | Windows |
+||:--|:-|
+| Release state: | Generally available (GA) | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) |
+| Supported destinations: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops (EVD)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 (other than EVD or WVD) |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/no-icon.png"::: No |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
++
+### [**Guest Configuration**](#tab/autoprovision-guestconfig)
+
+| Aspect | Details |
+||:--|
+| Release state: | Preview |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Defender for Containers**](#tab/autoprovision-containers)
+
+This table shows the availability details for the various components that can be auto provisioned to provide the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters |
+||-||
+| Release state: | • Defender profile is in preview<br> • Azure Policy add-on is generally available (GA) | • Defender extension is in preview<br> • Azure Policy extension for Azure Arc is in preview |
+| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers | Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
+++
+> [!TIP]
+> For items marked in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
## How does Defender for Cloud collect data?
Selecting a data collection tier in Microsoft Defender for Cloud only affects th
### Requirements

The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-Storing data in Log Analytics might incur additional charges for data storage. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Storing data in Log Analytics might incur additional charges for data storage. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users

Users of Microsoft Sentinel: note that security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you're planning to add Microsoft Sentinel to a workspace that is already getting alerts from Microsoft Defender for Cloud, and is set to collect Security Events, you have two options:
To turn off automatic provisioning of an agent:
## Next steps
-This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in Log Analytics, whether you use a new or existing workspace, might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in Log Analytics, whether you use a new or existing workspace, might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enable-enhanced-security.md
For the purpose of the Defender for Cloud quickstarts and tutorials you must ena
You can protect an entire Azure subscription with Defender for Cloud's enhanced security features and the protections will be inherited by all resources within the subscription.
-A free 30-day trial is available. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+A free 30-day trial is available. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Enable enhanced security features from the Azure portal
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enhanced-security-features-overview.md
Last updated 11/14/2021
The enhanced security features are free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage.
-You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
:::image type="content" source="media/enhanced-security-features-overview/defender-plans-top.png" alt-text="Enabling Microsoft Defender for Cloud's enhanced security features.":::
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud
This article explained Defender for Cloud's pricing options. For related material, see:

- [How to optimize your Azure workload costs](https://azure.microsoft.com/blog/how-to-optimize-your-azure-workload-costs/)
+- [Pricing details according to currency or region](https://azure.microsoft.com/pricing/details/defender-for-cloud/)
- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. Use [solution targeting](../azure-monitor/insights/solution-targeting.md) to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Defender for Cloud lists the workspace as not having a solution.
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/get-started.md
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
This quickstart section will walk you through all the recommended steps to enable Microsoft Defender for Cloud and the enhanced security features. When you've completed all the quickstart steps, you'll have:
defender-for-cloud Investigate Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/investigate-resource-health.md
Last updated 11/09/2021
> The resource health page described in this tutorial is a preview release.
> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md), you can see outstanding security alerts for that specific resource too.
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/partner-integration.md
Defender for Cloud makes it easy to enable integrated security solutions in Azur
- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events also are fused with detections from other sources to provide advanced threat-detection capabilities.
- **Unified health monitoring and management**: Customers can use integrated health events to monitor all partner solutions at a glance. Basic management is available, with easy access to advanced setup by using the partner solution.
-Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/) and [Microsoft Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md).
+Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/).
> [!NOTE]
> Defender for Cloud does not install the Log Analytics agent on partner virtual appliances because most security vendors prohibit external agents running on their appliances.
To learn more about the integration of vulnerability scanning tools from Qualys,
Defender for Cloud also offers vulnerability analysis for your:
-* SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
* Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)

## How security solutions are integrated
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-readiness-roadmap.md
Articles
* [Defender for Cloud Documentation Page](./index.yml)
* [Defender for Cloud REST API Documentation Page](/previous-versions/azure/reference/mt704034(v=azure.100))
* [Defender for Cloud frequently asked questions (FAQ)](./faq-general.yml)
-* [Pricing page](https://azure.microsoft.com/pricing/details/security-center/)
+* [Pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)
* [Identity security best practices](../security/fundamentals/identity-management-best-practices.md)
* [Network security best practices](../security/fundamentals/network-best-practices.md)
* [PaaS recommendations](../security/fundamentals/paas-deployments.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds.md
The two **tabs** below show the features of Microsoft Defender for Cloud that ar
> [!TIP]
->To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+>To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Supported endpoint protection solutions <a name="endpoint-supported"></a>
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/tutorial-security-incident.md
In this tutorial, you will learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.

## Prerequisites
-To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. You can try these at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/). The quickstart [Get started with Defender for Cloud](get-started.md) walks you through how to upgrade.
+To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. You can try these at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). The quickstart [Get started with Defender for Cloud](get-started.md) walks you through how to upgrade.
## Triage security alerts
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/windows-admin-center-integration.md
Windows Admin Center is a management tool for your Windows servers. It's a singl
> [!NOTE] > Your Azure subscription and the associated Log Analytics workspace both need to have Microsoft Defender for Cloud's enhanced security features enabled in order to enable the Windows Admin Center integration.
-> Enhanced security features are free for the first 30 days if you haven't previously used it on the subscription and workspace. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+> Enhanced security features are free for the first 30 days if you haven't previously used it on the subscription and workspace. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
> When you've successfully onboarded a server from Windows Admin Center to Microsoft Defender for Cloud, you can:
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Activation and setup of the on-premises management console ensures that:
## Sign in for the first time
-**To sign in to the management console:**
+**To sign in to the on-premises management console:**
1. Navigate to the IP address you received for the on-premises management console during the system installation.
-
-1. Enter the username and password you received for the on-premises management console during the system installation.
+1. Enter the username and password you received for the on-premises management console during the system installation.
If you forgot your password, select the **Recover Password** option, and see [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
After connecting, you must set up a site with these sensors.
Enable a secured tunneling connection between organizational sensors and the on-premises management console. This setup circumvents interaction with the organizational firewall, and as a result reduces the attack surface.
-Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (that is, 9000) to any sensor.
+Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor.
+ **To set up tunneling at the on-premises management console:**
Using tunneling allows you to connect to the on-premises management console from
    ```bash
    cyberx-management-tunnel-enable
- service apache2 reload
- sudo cyberx-management-tunnel-add-xsense --xsenseuid <sensorIPAddress> --xsenseport 9000
- service apache2 reload
+
+ ```
+
+Allow a few minutes for the connection to initiate.
+
+You can also customize the port to a number other than the default 9000, for example, 10000.
+
+**To use a new port:**
+
+- Sign in to the on-premises management console and run the following command:
+
+ ```bash
+ sudo cyberx-management-tunnel-enable --port 10000
+
```
-**To set up tunneling on the sensor:**
+Disable the connection when required.
+
+**To disable:**
+
+- Sign in to the on-premises management console and run the following command:
+
+ ```bash
+ cyberx-management-tunnel-disable
+
+ ```
+
+No configuration is needed on the sensor.
+
+**Log files**
-1. Open TCP port 9000 on the sensor (network.properties) manually. If the port is not open, the sensor will reject the connection from the on-premises management console.
+Review tunneling connection information in the log files.
-2. Sign in to each sensor and run the following commands:
+**To access log files:**
- ```bash
- sudo cyberx-xsense-management-connect -ip <on-premises management console IP Address> -token < Copy the string that appears after the IP colon (:) from the Connection String field, Management Console Connection dialog box>
- sudo cyberx-xsense-management-tunnel
- sudo vi /var/cyberx/properties/network.properties
- opened_tcp_incoming_ports=22,80,443,9000
- sudo cyberx-xsense-network-validation
- sudo /etc/network/if-up.d/iptables-recover
- sudo iptables -nvL
- ```
+1. Sign in to the on-premises management console and go to: `/var/log/apache2.log`
+1. Sign in to the sensor and go to: `/var/cyberx/logs/tunnel.log`
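+
+For example, to follow both logs in real time, you can stream new entries with `tail` (a minimal sketch using the paths above):
+
+```bash
+# On the on-premises management console
+sudo tail -f /var/log/apache2.log
+
+# On the sensor
+sudo tail -f /var/cyberx/logs/tunnel.log
+```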
## Set up a site
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Open these ports to allow extra services for Defender for IoT.
| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server | | Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server. | On-premises management console and Sensor | Syslog server | | LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to log in to the system. | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Sensor | On-premises management console |
+| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
### Plan rack installation
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration. > [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows DMS service to upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
> [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.-
- > - Azure Storage Account with Private endpoint is not supported by Azure Database Migration service.
-
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
> Regarding the storage account used as part of the migration, you must either: > * Choose to allow all network to access the storage account. > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on MI subnet and update the Storage Account firewall rules to allow this subnet.
-
- >- Azure Storage account with the private endpoint is not supported by Azure Database Migration Service.
+ > * You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md). * Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
- Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container), be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI. This detail provides Azure Database Migration Service with access to your storage account container for uploading the backup files used for migrating databases to SQL Managed Instance. > [!NOTE]
- > Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
-
-
- >- Azure Storage Account with Private endpoint is not supported by Azure Database Migration service.
+ > - Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/about-fastpath.md
To configure FastPath, the virtual network gateway must be either:
While FastPath supports most configurations, it doesn't support the following features:
-* UDR on the gateway subnet: This UDR has no impact on the network traffic that FastPath sends directly from your on-premises network to the virtual machines in Azure virtual network.
+* UDR on the gateway subnet: FastPath doesn't honor UDRs configured on the gateway subnet. FastPath traffic bypasses any next-hops determined by UDRs configured on the gateway subnet.
* Basic Load Balancer: If you deploy a Basic internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network uses a Basic internal load balancer, the network traffic from your on-premises network to the virtual IPs hosted on the Basic load balancer will be sent to the virtual network gateway. The solution is to upgrade the Basic load balancer to a [Standard load balancer](../load-balancer/load-balancer-overview.md).
This preview is available in the following Azure Regions.
- West US 2 - West US 3
+This preview supports connectivity to the following Azure services:
+- Azure Cosmos DB
+- Azure Key Vault
+- Azure Storage
+- Third Party Private Link Services
+ > [!NOTE] > Private Link pricing will not apply to traffic sent over ExpressRoute FastPath during Public preview. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). >
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Title: 'Azure ExpressRoute: Verify Connectivity - Troubleshooting Guide'
-description: This page provides instructions on troubleshooting and validating end to end connectivity of an ExpressRoute circuit.
+description: This page provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit.
Previously updated : 10/31/2019 Last updated : 01/07/2022
This article helps you verify and troubleshoot ExpressRoute connectivity. Expres
>
-The purpose of this document is to help user to identify if and where a connectivity issue exists. Thereby, to help seek support from the appropriate team to resolve an issue. If Microsoft support is needed to resolve an issue, open a support ticket with [Microsoft Support][Support].
+The purpose of this document is to help you identify if and where a connectivity issue exists, so that you can seek support from the appropriate team to resolve it. If Microsoft support is needed to resolve an issue, open a support ticket with [Microsoft Support][Support].
> [!IMPORTANT] > This document is intended to help diagnosing and fixing simple issues. It is not intended to be a replacement for Microsoft support. Open a support ticket with [Microsoft Support][Support] if you are unable to solve the problem using the guidance provided.
The following are the logical steps, in troubleshooting ExpressRoute circuit:
* [Confirm the traffic flow](#confirm-the-traffic-flow)
+* [Test private peering connectivity](#test-private-peering-connectivity)
+
+## Verify circuit provisioning and state

Provisioning an ExpressRoute circuit establishes redundant Layer 2 connections between CEs/PE-MSEEs (2)/(4) and MSEEs (5). For more information on how to create, modify, provision, and verify an ExpressRoute circuit, see the article [Create and modify an ExpressRoute circuit][CreateCircuit].
Provisioning an ExpressRoute circuit establishes a redundant Layer 2 connections
> ### Verification via the Azure portal
-In the Azure portal, open the ExpressRoute circuit blade. In the ![3][3] section of the blade, the ExpressRoute essentials are listed as shown in the following screenshot:
+In the Azure portal, open the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute essentials are listed as shown in the following screenshot:
![4][4]
After the service provider has completed the provisioning the ExpressRoute circu
> In IPVPN connectivity model, service providers handle the responsibility of configuring the peerings (layer 3 services). In such a model, after the service provider has configured a peering and if the peering is blank in the portal, try refreshing the circuit configuration using the refresh button on the portal. This operation will pull the current routing configuration from your circuit. >
-In the Azure portal, status of an ExpressRoute circuit peering can be checked under the ExpressRoute circuit blade. In the ![3][3] section of the blade, the ExpressRoute peerings would be listed as shown in the following screenshot:
+In the Azure portal, status of an ExpressRoute circuit peering can be checked under the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute peerings would be listed as shown in the following screenshot:
![5][5]
-In the preceding example, as noted Azure private peering is provisioned, whereas Azure public and Microsoft peerings are not provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
+In the preceding example, as noted Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
> [!NOTE] > If enabling a peering fails, check if the primary and secondary subnets assigned match the configuration on the linked CE/PE-MSEE. Also check if the correct *VlanId*, *AzureASN*, and *PeerASN* are used on MSEEs and if these values maps to the ones used on the linked CE/PE-MSEE. If MD5 hashing is chosen, the shared key should be same on MSEE and PE-MSEE/CE pair. Previously configured shared key would not be displayed for security reasons. Should you need to change any of these configuration on an MSEE router, refer to [Create and modify routing for an ExpressRoute circuit][CreatePeering].
$ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-
Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $ckt
```
-If a peering is not configured, there would be an error message. A sample response, when the stated peering (Azure Public peering in this example) is not configured within the circuit:
+If a peering isn't configured, there would be an error message. A sample response, when the stated peering (Azure Public peering in this example) isn't configured within the circuit:
```azurepowershell
Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element
Path : 123##
>
-The following example shows the response of the command for a peering that does not exist:
+The following example shows the response of the command for a peering that doesn't exist:
```azurepowershell Get-AzExpressRouteCircuitRouteTable : The BGP Peering AzurePublicPeering with Service Key ********************* is not found.
Get-AzExpressRouteCircuitRouteTable : The BGP Peering AzurePublicPeering with Se
StatusCode: 400
```
+## Test private peering connectivity
+
+Test your private peering connectivity by **counting** packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit, on the Microsoft Enterprise Edge (MSEE) devices. This diagnostic tool works by applying an Access Control List (ACL) to the MSEE to count the number of packets that hit specific ACL rules. Using this tool allows you to confirm connectivity by answering questions such as:
+
+* Are my packets getting to Azure?
+* Are they getting back to on-prem?
+
+### Run test
+1. To access this diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of diagnose and solve problem page from ExpressRoute circuit.":::
+
+1. Select the **Connectivity issues** card under **Common problems**.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/connectivity-issues.png" alt-text="Screenshot of connectivity issues option.":::
+
+1. In the dropdown for *Tell us more about the problem you are experiencing*, select **Connectivity to Azure Private, Azure Public, or Dynamics 365 services.**
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/tell-us-more.png" alt-text="Screenshot of drop-down option for problem user is experiencing.":::
+
+1. Scroll down to the **Test your private peering connectivity** section and expand it.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/test-private-peering.png" alt-text="Screenshot of troubleshooting connectivity issues options.":::
+
+1. Execute the [PsPing](https://docs.microsoft.com/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address and keep it running during the connectivity test.
+
+1. Fill out the fields of the form, making sure to enter the same on-premises and Azure IP addresses used in step 5. Then select **Submit** and wait for your results to load. When the results are ready, use the information in the next section to interpret them.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/form.png" alt-text="Screenshot of debug ACL form.":::
+
+### Interpreting results
+Your test results for each MSEE device will look like the example below. You'll have two sets of results for the primary and secondary MSEE devices. Review the number of matches in and out and use the following scenarios to interpret the results:
+* **You see packet matches sent and received on both MSEEs:** This indicates healthy traffic inbound to and outbound from the MSEE on your circuit. If loss is occurring either on-premises or in Azure, it is happening downstream from the MSEE.
+* **If testing PsPing from on-premises to Azure *(received)* results show matches, but *sent* results show NO matches:** This indicates that traffic is getting inbound to Azure, but isn't returning to on-prem. Check for return-path routing issues (for example, are you advertising the appropriate prefixes to Azure? Is there a UDR overriding prefixes?).
+* **If testing PsPing from Azure to on-premises *(sent)* results show NO matches, but *(received)* results show matches:** This indicates that traffic is getting to on-premises, but isn't getting back. You should work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit.
+* **One MSEE shows NO matches, while the other shows good matches:** This indicates that one MSEE isn't receiving or passing any traffic. It could be offline (for example, BGP/ARP down).
+
+#### Example
+```
+src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches
+src 20.0.0.0 srcport 3389 dst 10.0.0.0 (sent): 120 matches
+```
+This test result has the following properties:
+
+* IP Port: 3389
+* On-prem IP Address CIDR: 10.0.0.0
+* Azure IP Address CIDR: 20.0.0.0
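+
+For reference, a PsPing invocation consistent with this example would continuously TCP-ping the Azure address on port 3389 from the on-premises machine (a sketch; the address and port are the illustrative values above):
+
+```
+psping -t 20.0.0.0:3389
+```
+
+Keep PsPing running while the debug ACL counts packets, so the matches reflect live traffic.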
+
## Next Steps

For more information or help, check out the following links:
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-performance.md
+
+ Title: Azure Firewall performance
+description: Compare Azure Firewall performance for Azure Firewall Standard and Premium
+Last updated : 01/11/2022
+# Azure Firewall performance
+
+Reliable firewall performance is essential to operate and protect your virtual networks in Azure. More advanced features (like those found in Azure Firewall Premium) require more processing, which affects firewall performance and can impact overall network performance.
+
+Azure Firewall has two versions: Standard and Premium.
+
+- Azure Firewall Standard
+
+ Azure Firewall Standard has been generally available since September 2018. It's a cloud-native, highly available firewall-as-a-service with built-in auto scaling. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application-level and network-level filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
+- Azure Firewall Premium
+
+ Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the Firewall are TLS inspection and IDPS (Intrusion Detection and Prevention).
+
+For more information about Azure Firewall, see [What is Azure Firewall?](overview.md)
+
+## Performance testing
+
+Before deploying Azure Firewall, test and evaluate its performance to ensure it meets your expectations. Azure Firewall should not only handle the current traffic on a network, but also be ready for potential traffic growth. Evaluate on a test network, not in a production environment. The testing should replicate the production environment as closely as possible, including the network topology and the actual characteristics of the expected traffic through the firewall.
+
+## Performance data
+
+The following set of performance results demonstrates the maximum Azure Firewall throughput in various use cases. All use cases were measured while Threat intelligence mode was set to alert/deny.
++
+|Firewall type and use case |TCP/UDP bandwidth (Gbps) |HTTP/S bandwidth (Gbps) |
+||||
+|Standard |30|30|
+|Premium (no TLS/IDPS) |30|30|
+|Premium with TLS |-|30|
+|Premium with IDS |30|30|
+|Premium with IPS |10|10|
+
+> [!NOTE]
+> IPS (Intrusion Prevention System) is in effect when one or more signatures are configured to *Alert and Deny* mode.
+
+Azure Firewall Premium's new performance boost functionality is now in public preview and provides enhancements to the overall firewall performance, as shown below:
++
+|Firewall use case |Without performance boost (Gbps) |With performance boost (Gbps) |
+||||
+|Standard<br>Max bandwidth for single TCP connection |1.3|-|
+|Premium<br>Max bandwidth for single TCP connection |2.6|9.5|
+|Premium max bandwidth with TLS/IDS|30|100|
+
+Performance values are calculated with Azure Firewall at full scale and with Premium performance boost enabled. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
+
+## How to configure Premium performance boost (preview)
+
+As more applications move to the cloud, network element performance can become a bottleneck. As a result, Premium performance boost (preview) is available for Azure Firewall Premium to allow more scalability for those deployments.
+
+To enable the Azure Firewall Premium performance boost, run the following Azure PowerShell commands. This feature is applied at the **subscription** level for all firewalls (VNet firewalls and SecureHub firewalls). Currently, the performance boost isn't recommended for SecureHub firewalls. Check back here for the latest updates as we work to change this recommendation. Also, this setting has no effect on Standard firewalls.
+
+After you run the Azure PowerShell commands, an update operation needs to be run on the Azure Firewall for the feature to take effect immediately. This update operation can be a rule change (least intrusive), a setting configuration, or a Stop/Start operation. Otherwise, the firewalls will update with the feature within several days.
+
+Run the following Azure PowerShell to configure the Azure Firewall Premium performance boost:
+
+```azurepowershell
+Connect-AzAccount
+
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+
+Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
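+
+To check whether the registration completed, you can query the feature state (a quick check assuming the same feature name; registration can take a few minutes to show as **Registered**):
+
+```azurepowershell
+Get-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```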
+
+Run the following Azure PowerShell to turn it off:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
+++
+## Next steps
+
+- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/assignment-structure.md
You use JSON to create a policy assignment. The policy assignment contains eleme
- policy definition
- non-compliance messages
- parameters
+- identity
For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
In this example, the parameters previously defined in the policy definition are
same policy definition is reusable with a different set of parameters for a different department, reducing the duplication and complexity of policy definitions while providing flexibility.
+## Identity
+For policy assignments with effect set to **deployIfNotExists** or **modify**, an **identity** property is required to do remediation on non-compliant resources. When using an identity, you must also specify a location for the assignment.
+
+```json
+# System assigned identity
+ "identity": {
+ "type": "SystemAssigned"
+ }
+# User assigned identity
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/SubscriptionID/resourceGroups/testResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-identity": {}
+ }
+ },
+```
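+
+For illustration, a trimmed assignment request body that pairs a system-assigned identity with the required location might look like the following sketch (property values are placeholders):
+
+```json
+{
+  "identity": {
+    "type": "SystemAssigned"
+  },
+  "location": "westus",
+  "properties": {
+    "displayName": "Enforce resource naming rules",
+    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definition-id}"
+  }
+}
+```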
## Next steps

- Learn about the [policy definition structure](./definition-structure.md).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
location of the Constraint template to use in Kubernetes to limit the allowed co
## DeployIfNotExists Similar to AuditIfNotExists, a DeployIfNotExists policy definition executes a template deployment
-when the condition is met.
+when the condition is met. Policy assignments with effect set as DeployIfNotExists require a [managed identity](../how-to/remediate-resources.md) to do remediation.
> [!NOTE] > [Nested templates](../../../azure-resource-manager/templates/linked-templates.md#nested-template)
Modify is used to add, update, or remove properties or tags on a subscription or
creation or update. A common example is updating tags on resources such as costCenter. Existing non-compliant resources can be remediated with a [remediation task](../how-to/remediate-resources.md). A single Modify rule can have any number of
-operations.
+operations. Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation.
The following operations are supported by Modify:
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/remediate-resources.md
Title: Remediate non-compliant resources description: This guide walks you through the remediation of resources that are non-compliant to policies in Azure Policy. Previously updated : 08/17/2021 Last updated : 12/1/2021 # Remediate non-compliant resources with Azure Policy
understand and accomplish remediation with Azure Policy.
## How remediation security works
-When Azure Policy runs the template in the **deployIfNotExists** policy definition, it does so using
-a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md).
-Azure Policy creates a managed identity for each assignment, but must have details about what roles
+When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using
+a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment.
+Policy assignments can use either a system-assigned managed identity that's created by the policy service or a user-assigned identity provided by the user. The managed identity needs to be granted the appropriate roles required for remediating resources
to grant the managed identity. If the managed identity is missing roles, an error is displayed during the assignment of the policy or an initiative. When using the portal, Azure Policy automatically grants the managed identity the listed roles once assignment starts. When using SDK, the roles must manually be granted to the managed identity. The _location_ of the managed identity doesn't impact its operation with Azure Policy. > [!IMPORTANT] > In the following scenarios, the assignment's managed identity must be
doesn't impact its operation with Azure Policy.
The first step is to define the roles that **deployIfNotExists** and **modify** needs in the policy definition to successfully deploy the content of your included template. Under the **details**
-property, add a **roleDefinitionIds** property. This property is an array of strings that match
+property in the policy definition, add a **roleDefinitionIds** property. This property is an array of strings that match
roles in your environment. For a full example, see the [deployIfNotExists example](../concepts/effects.md#deployifnotexists-example) or the [modify examples](../concepts/effects.md#modify-examples).
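+
+For instance, a **details** block that grants the built-in Contributor role would include an entry like this (a sketch; the GUID shown is the well-known Contributor role definition ID):
+
+```json
+"details": {
+  "roleDefinitionIds": [
+    "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
+  ]
+}
+```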
az role definition list --name 'Contributor'
## Manually configure the managed identity
-When creating an assignment using the portal, Azure Policy both generates the managed identity and
-grants it the roles defined in **roleDefinitionIds**. In the following conditions, steps to create
+When creating an assignment using the portal, Azure Policy can both generate a managed identity and
+grant it the roles defined in **roleDefinitionIds**. In the following conditions, steps to create
the managed identity and assign it permissions must be done manually: - While using the SDK (such as Azure PowerShell) - When a resource outside the assignment scope is modified by the template - When a resource outside the assignment scope is read by the template
+## Configure a managed identity through the Azure portal
+
+When creating an assignment using the portal, you can select either a system-assigned managed identity or a user-assigned managed identity.
+
+To set a system assigned managed identity in the portal:
+
+1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **System assigned managed identity**
+is selected.
+
+1. Specify the location at which the managed identity is to be located.
+
+To set a user assigned managed identity in the portal:
+
+1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **User assigned managed identity**
+is selected.
+
+1. Specify the scope where the managed identity is hosted. The scope of the managed identity doesn't have to match the scope of the assignment, but it must be in the same tenant.
+
+1. Under **Existing user assigned identities**, select the managed identity.
+
+ > [!NOTE]
+ > If the managed identity does not have the permissions needed to execute the required remediation task, it will be granted permissions *automatically* only through the portal. For all other methods, permissions must be configured manually.
+ >
+ ### Create managed identity with PowerShell
-To create a managed identity during the assignment of the policy, **Location** must be defined and
-**AssignIdentity** used. The following example gets the definition of the built-in policy **Deploy
+To create an identity during the assignment of the policy, **Location** must be defined and **Identity** used. The following example gets the definition of the built-in policy **Deploy SQL DB transparent data encryption**, sets the target resource group, and then creates the assignment using a **system-assigned** managed identity.
+
+```azurepowershell-interactive
+# Login first with Connect-AzAccount if not using Cloud Shell
+
+# Get the built-in "Deploy SQL DB transparent data encryption" policy definition
+$policyDef = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/86a912f6-9a06-4e26-b447-11b16ba8659f'
+
+# Get the reference to the resource group
+$resourceGroup = Get-AzResourceGroup -Name 'MyResourceGroup'
+
+# Create the assignment using the -Location and -Identity properties
+$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -IdentityType "SystemAssigned"
+```
+
+The following example gets the definition of the built-in policy **Deploy
SQL DB transparent data encryption**, sets the target resource group, and then creates the
-assignment.
+assignment using a **user-assigned** managed identity.
```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell
$policyDef = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/poli
# Get the reference to the resource group $resourceGroup = Get-AzResourceGroup -Name 'MyResourceGroup'
-# Create the assignment using the -Location and -AssignIdentity properties
-$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -AssignIdentity
+# Get the existing user assigned managed identity ID
+$userassignedidentity = Get-AzUserAssignedIdentity -ResourceGroupName $rgname -Name $userassignedidentityname
+$userassignedidentityid = $userassignedidentity.Id
+
+# Create the assignment using the -Location and -Identity properties
+$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -IdentityType "UserAssigned" -IdentityId $userassignedidentityid
```
-The `$assignment` variable now contains the principal ID of the managed identity along with the
-standard values returned when creating a policy assignment. It can be accessed through
-`$assignment.Identity.PrincipalId`.
+The `$assignment` variable now contains the principal ID of the managed identity along with the standard values returned when creating a policy assignment. It can be accessed through
+`$assignment.Identity.PrincipalId` for system-assigned managed identities and `$assignment.Identity.UserAssignedIdentities[$userassignedidentityid].PrincipalId` for user-assigned managed identities.
-### Grant defined roles with PowerShell
+### Grant a managed identity defined roles with PowerShell
The new managed identity must complete replication through Azure Active Directory before it can be granted the needed roles. Once replication is complete, the following example iterates the policy
if ($roleDefinitionIds.Count -gt 0)
}
```
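+
+If you'd rather grant a single role directly instead of iterating the definition's roles, a minimal sketch (assuming the `$assignment` and `$resourceGroup` variables from the earlier examples, and a role definition GUID taken from **roleDefinitionIds**) could be:
+
+```azurepowershell
+# Grant one role to the assignment's system-assigned managed identity
+$roleDefinitionId = '<role-definition-guid>'
+New-AzRoleAssignment -ObjectId $assignment.Identity.PrincipalId -RoleDefinitionId $roleDefinitionId -Scope $resourceGroup.ResourceId
+```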
-### Grant defined roles through portal
+### Grant a managed identity defined roles through the portal
There are two ways to grant an assignment's managed identity the defined roles using the portal, by using **Access control (IAM)** or by editing the policy or initiative assignment and selecting
To add a role to the assignment's managed identity, follow these steps:
Leave **Assign access to** set to the default of 'Azure AD user, group, or application'. In the **Select** box, paste or type the portion of the assignment resource ID located earlier. Once the search completes, select the object with the same name to select ID and select **Save**.-
+
## Create a remediation task
-### Create a remediation task through portal
+The following sections describe how to create a remediation task.
+
+### Create a remediation task through the portal
During evaluation, the policy assignment with **deployIfNotExists** or **modify** effects determines if there are non-compliant resources or subscriptions. When non-compliant resources or subscriptions
To create a **remediation task**, follow these steps:
> An alternate way to open the **remediation task** page is to find and select the policy from > the **Compliance** page, then select the **Create Remediation Task** button.
-1. On the **New remediation task** page, filter the resources to remediate by using the **Scope**
+1. On the **New remediation task** page, optional remediation settings are shown:
+
+ - **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 and 100. By default, the failure threshold is 100%.
+ - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 10,000 resources.
+ - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 15 resources at a time. The default value is 10.
+
+ > [!NOTE]
+ > These settings cannot be changed once the remediation task has started.
+
+1. On the same page, filter the resources to remediate by using the **Scope**
ellipses to pick child resources from where the policy is assigned (including down to the individual resource objects). Additionally, use the **Locations** dropdown list to further filter
- the resources. Only resources listed in the table will be remediated.
+ the resources.
:::image type="content" source="../media/remediate-resources/select-resources.png" alt-text="Screenshot of the Remediate node and the grid of resources to remediate." border="false":::
To create a **remediation task**, follow these steps:
:::image type="content" source="../media/remediate-resources/task-progress.png" alt-text="Screenshot of the Remediation tasks tab and progress of existing remediation tasks." border="false":::
-1. Select on the **remediation task** from the policy compliance page to get details about the
- progress. The filtering used for the task is shown along with a list of the resources being
+1. Select the **remediation task** from the policy compliance page to get details about the
+ progress. The filtering used for the task is shown along with status and a list of resources being
remediated. 1. From the **Remediation task** page, select and hold (or right-click) on a resource to view either the remediation
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| acceptOptionalTelemetry | **Yes** or **No** | A shortcut to accept/deny optional telemetry and bypass the telemetry prompt. | | cpuCount | Integer value between 1 and the device's CPU cores | Number of CPU cores for the VM.<br><br>**Default value**: 1 vCore. | | memoryInMB | Integer value between 1024 and the maximum amount of free memory of the device |Memory allocated for the VM.<br><br>**Default value**: 1024 MB. |
-| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 10 GB. |
+| vmDiskSize | Between 8 GB and 2 TB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 16 GB. |
| vswitchName | Name of the virtual switch | Name of the virtual switch assigned to the EFLOW VM. | | vswitchType | **Internal** or **External** | Type of the virtual switch assigned to the EFLOW VM. | | ip4Address | IPv4 Address in the range of the DCHP Server Scope | Static Ipv4 address of the EFLOW VM. |
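+
+For example, a Deploy-Eflow invocation using several of these parameters might look like the following sketch (the values are illustrative, not recommendations, and the `acceptEula` parameter is assumed from the full parameter list):
+
+```powershell
+# Deploy the EFLOW VM with 2 vCores, 4 GB of memory, and a 32 GB maximum disk
+Deploy-Eflow -acceptEula Yes -acceptOptionalTelemetry No -cpuCount 2 -memoryInMB 4096 -vmDiskSize 32
+```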
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-concepts-and-iot-hub.md
Last updated 07/07/2021+ #Customer intent: As a developer new to IoT Hub, learn the basic concepts.
You can integrate IoT Hub with other Azure services to build complete, end-to-en
* [Azure Stream Analytics](../stream-analytics/index.yml) to run real-time analytic computations on the data streaming from your devices.
+[IoT Central](../iot-central/core/overview-iot-central.md) applications use multiple IoT hubs as part of their scalable and resilient infrastructure.
+ IoT Hub has a 99.9% [Service Level Agreement for IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole. Each Azure subscription has default quota limits in place to prevent service abuse. These limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub Quotas and Throttling](iot-hub-devguide-quotas-throttling.md). For more details on quota limits, see one of the following articles:
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Last updated 11/24/2021
# Configure key auto-rotation in Azure Key Vault (preview)
-> [!IMPORTANT]
+> [!WARNING]
> This feature is currently disabled due to an issue with the service. ## Overview
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/overview-what-is-azure-load-testing.md
You can create a load test by using existing test scripts based on Apache JMeter
Azure Load Testing test engines abstract the required infrastructure for running a high-scale load test. The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. To scale out the load test, you can configure the number of test engines.
+Azure Load Testing uses Apache JMeter version 5.4.1 for running load tests.
+ The application can be hosted anywhere: in Azure, on-premises, or in other clouds. During the load test, the service collects the following resource metrics and displays them in a dashboard: - *Client-side metrics* give you details reported by the test engine. These details include the number of virtual users, the request response time, or the number of requests per second.
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/quickstart-create-and-run-load-test.md
Previously updated : 11/30/2021 Last updated : 01/11/2022 adobe-target: true
In this section, you'll create a sample Apache JMeter script that you'll use in
<?xml version="1.0" encoding="UTF-8"?> <jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1"> <hashTree>
- <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
+ <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Azure Load Testing Quickstart" enabled="true">
<stringProp name="TestPlan.comments"></stringProp> <boolProp name="TestPlan.functional_mode">false</boolProp> <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
<stringProp name="TestPlan.user_define_classpath"></stringProp> </TestPlan> <hashTree>
- <kg.apc.jmeter.threads.UltimateThreadGroup guiclass="kg.apc.jmeter.threads.UltimateThreadGroupGui" testclass="kg.apc.jmeter.threads.UltimateThreadGroup" testname="jp@gc - Ultimate Thread Group" enabled="true">
- <collectionProp name="ultimatethreadgroupdata">
- <collectionProp name="1400604752">
- <stringProp name="1567">5</stringProp>
- <stringProp name="0">0</stringProp>
- <stringProp name="48873">30</stringProp>
- <stringProp name="49710">60</stringProp>
- <stringProp name="10">10</stringProp>
- </collectionProp>
- </collectionProp>
+ <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
+ <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
<elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true"> <boolProp name="LoopController.continue_forever">false</boolProp> <intProp name="LoopController.loops">-1</intProp> </elementProp>
- <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
- </kg.apc.jmeter.threads.UltimateThreadGroup>
+ <stringProp name="ThreadGroup.num_threads">5</stringProp>
+ <stringProp name="ThreadGroup.ramp_time">10</stringProp>
+ <boolProp name="ThreadGroup.scheduler">true</boolProp>
+ <stringProp name="ThreadGroup.duration">120</stringProp>
+ <stringProp name="ThreadGroup.delay">5</stringProp>
+ <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
+ </ThreadGroup>
<hashTree>
- <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="homepage" enabled="true">
+ <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Homepage" enabled="true">
<elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true"> <collectionProp name="Arguments.arguments"/> </elementProp> <stringProp name="HTTPSampler.domain">your-endpoint-url</stringProp> <stringProp name="HTTPSampler.port"></stringProp>
- <stringProp name="HTTPSampler.protocol">https</stringProp>
+ <stringProp name="HTTPSampler.protocol"></stringProp>
<stringProp name="HTTPSampler.contentEncoding"></stringProp> <stringProp name="HTTPSampler.path"></stringProp> <stringProp name="HTTPSampler.method">GET</stringProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
<boolProp name="HTTPSampler.use_keepalive">true</boolProp> <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp> <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
- <stringProp name="HTTPSampler.implementation">HttpClient4</stringProp>
- <stringProp name="HTTPSampler.connect_timeout">60000</stringProp>
- <stringProp name="HTTPSampler.response_timeout">60000</stringProp>
+ <stringProp name="HTTPSampler.connect_timeout"></stringProp>
+ <stringProp name="HTTPSampler.response_timeout"></stringProp>
</HTTPSamplerProxy> <hashTree/> </hashTree>
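+
+Optionally, before uploading the script, you can validate it locally in Apache JMeter's non-GUI mode (a sketch; assumes JMeter 5.4.1 is installed locally and the plan is saved as *SampleTest.jmx*, a placeholder name):
+
+```bash
+# Run the test plan headless and write the results to a JTL file
+jmeter -n -t SampleTest.jmx -l results.jtl
+```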
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-github-actions.md
The sample application's source repo includes an Apache JMeter script named *Sam
## Set up GitHub access permissions for Azure
-In this section, you'll configure your GitHub repository to have permissions for accessing the Azure Load Testing resource.
+The GitHub Actions workflow needs to authenticate with Azure to access Azure resources. In the sample application, you use the [Azure Login](https://github.com/Azure/login) action and an Azure Active Directory service principal to authenticate with Azure.
-To access Azure resources, you'll create an Azure Active Directory service principal and use role-based access control to assign the necessary permissions.
+In this section, you'll configure your GitHub repository to have permissions to access your Azure load testing resource:
-1. Run the following Azure CLI command to create a service principal:
+1. Run the following Azure CLI command to create a service principal and assign the Contributor role:
```azurecli
az ad sp create-for-rbac --name "my-load-test-cicd" --role contributor \
To access Azure resources, you'll create an Azure Active Directory service princ
}
```
+ > [!NOTE]
+ > Azure Login supports multiple ways to authenticate with Azure. For other authentication options, see the [Azure and GitHub integration site](/azure/developer/github).
+ 1. Go to your forked GitHub repository for the sample application.
+ You'll add a GitHub secret to your repository for the service principal you created in the previous step. The Azure Login action uses this secret to authenticate with Azure.
+ 1. Add a new secret to your GitHub repository by selecting **Settings** > **Secrets** > **New repository secret**. :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret.png" alt-text="Screenshot that shows selections for adding a new repository secret to your GitHub repo.":::
To access Azure resources, you'll create an Azure Active Directory service princ
--subscription "<subscription-name-or-id>"
```
+You can now use the `AZURE_CREDENTIALS` secret with the Azure Login action in your CI/CD workflow. The following code snippet describes how this works for the sample application:
+
+```yml
+jobs:
+ build-and-deploy:
+ # The type of runner that the job will run on
+ runs-on: ubuntu-latest
+
+ # Steps represent a sequence of tasks that will be executed as part of the job
+ steps:
+ # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
+ - name: Checkout GitHub Actions
+ uses: actions/checkout@v2
+
+ - name: Login to Azure
+ uses: azure/login@v1
+ continue-on-error: false
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+```
## Configure the GitHub Actions workflow to run a load test

In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test. The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
You might also want to explore other quickstart guides for Azure Logic Apps:
* [Create a multi-tenant based logic app using an ARM template](quickstart-create-deploy-azure-resource-manager-template.md)
-* [Create a multi-tenant based logic app using the Azure CLI](quickstart-create-deploy-azure-resource-manager-template.md)
+* [Create a multi-tenant based logic app using the Azure CLI](quickstart-logic-apps-azure-cli.md)
## Other resources
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Title: Access Azure resources from managed endpoint
+ Title: Access Azure resources from an online endpoint
description: Securely access Azure resources for your machine learning model deployment from an online endpoint with a system-assigned or user-assigned managed identity.
Previously updated : 12/22/2021 Last updated : 01/11/2022
This guide assumes you don't have a managed identity, a storage account or an on
## Define configuration YAML file for deployment
-To deploy a managed endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md) document.
+To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md) document.
The YAML files in the following examples are used to create online endpoints.
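+
+As a rough illustration, a minimal endpoint definition with a system-assigned identity might start like this sketch (the name is a placeholder; see the YAML reference above for the full schema):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+name: my-endpoint
+auth_mode: key
+identity:
+  type: system_assigned
+```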
Decide on the name of your user identity name, and export that value as an envir
## Create the managed identity
-To access Azure resources, create a system-assigned or user-assigned managed identity for your endpoint.
+To access Azure resources, create a system-assigned or user-assigned managed identity for your online endpoint.
# [System-assigned managed identity](#tab/system-identity)
To create a user-assigned managed identity, use the following:
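+
+For example, with the Azure CLI (a sketch; `$UAI_NAME` is the environment variable you exported earlier, and `$RESOURCE_GROUP` is assumed to hold your resource group name):
+
+```azurecli
+az identity create --name $UAI_NAME --resource-group $RESOURCE_GROUP
+```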
## Create storage account and container

For this example, create a blob storage account and blob container, and then upload the previously created text file to the blob container.
-This is the storage account and blob container that you'll give the endpoint and managed identity access to.
+This is the storage account and blob container that you'll give the online endpoint and managed identity access to.
# [System-assigned managed identity](#tab/system-identity)
Then, upload file in container.
The following code creates an online endpoint without specifying a deployment. # [System-assigned managed identity](#tab/system-identity)
-When you create a managed endpoint, a system-assigned managed identity is created for the endpoint by default.
+When you create an online endpoint, a system-assigned managed identity is created for the endpoint by default.
>[!IMPORTANT] > System assigned managed identities are immutable and can't be changed once created.
If you encounter any issues, see [Troubleshooting online endpoints deployment an
>[!IMPORTANT] > Online endpoints require Azure Container Registry pull permission, AcrPull permission, to the container registry and Storage Blob Data Reader permission to the default datastore of the workspace.
-You can allow the managed endpoint permission to access your storage via its system-assigned managed identity or give permission to the user-assigned managed identity to access the storage account created in the previous section.
+You can allow the online endpoint permission to access your storage via its system-assigned managed identity or give permission to the user-assigned managed identity to access the storage account created in the previous section.
# [System-assigned managed identity](#tab/system-identity)
Refer to the following script to understand how to use your identity token to ac
## Create a deployment with your configuration
-Create a deployment that's associated with the managed endpoint. [Learn more about deploying to online endpoints](how-to-deploy-managed-online-endpoints.md).
+Create a deployment that's associated with the online endpoint. [Learn more about deploying to online endpoints](how-to-deploy-managed-online-endpoints.md).
>[!WARNING] > This deployment can take approximately 8-14 minutes depending on whether the underlying environment/image is being built for the first time. Subsequent deployments using the same environment will go quicker.
When your deployment completes, the model, the environment, and the endpoint ar
## Confirm your endpoint deployed successfully
-Once your endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
+Once your online endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/model-1/sample-request.json" :::
To call your endpoint, run:
## Delete the endpoint and storage account
-If you don't plan to continue using the deployed endpoint and storage, delete them to reduce costs. When you delete the endpoint, all of its associated deployments are deleted as well.
+If you don't plan to continue using the deployed online endpoint and storage, delete them to reduce costs. When you delete the endpoint, all of its associated deployments are deleted as well.
# [System-assigned managed identity](#tab/system-identity)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 11/19/2021 Last updated : 01/11/2022
When ACR is behind a virtual network, Azure Machine Learning cannot use it to di
> [!WARNING] > If your Azure Container Registry uses a private endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. To use a managed identity with a compute cluster, use a service endpoint with the Azure Container Registry for the workspace.
+### Azure Monitor
+
+> [!WARNING]
+> Azure Monitor supports using Azure Private Link to connect to a VNet. However, Azure Machine Learning does not support using a private link-enabled Azure Monitor (including Azure Application Insights). Do __not__ configure private link for the Azure Monitor or Application Insights resources you plan to use with Azure Machine Learning.
+ ## Required public internet access [!INCLUDE [machine-learning-required-public-internet-access](../../includes/machine-learning-public-internet-access.md)]
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### PyTorch
-**Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu (**Preview**)
-**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and additional python packages.
+**Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
+**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and other python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/pytorch-1.10
RUN pip install 'matplotlib>=3.3,<3.4' \
'scipy>=1.5,<1.8' \ 'numpy>=1.10,<1.22' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'tensorboard==2.6.0' \ 'tensorflow-gpu==2.6.0' \ 'onnxruntime-gpu>=1.7,<1.10' \
Other available PyTorch environments:
### LightGBM **Name**: AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
-**Description**: An environment for machine learning with Scikit-learn, LightGBM, XGBoost, Dask containing the AzureML Python SDK and additional packages.
+**Description**: An environment for machine learning with Scikit-learn, LightGBM, XGBoost, Dask containing the AzureML Python SDK and other packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/lightgbm
RUN HOROVOD_WITH_TENSORFLOW=1 \
'dask-ml~=1.9.0' \ 'adlfs~=0.7.0' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0'
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0'
# This is needed for mpi to locate libpython
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
### Sklearn **Name**: AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and additional python packages.
+**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and other python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/sklearn-0.24.1
RUN pip install 'matplotlib>=3.3,<3.4' \
'scipy>=1.5,<1.6' \ 'numpy>=1.10,<1.20' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'scikit-learn==0.24.1' # This is needed for mpi to locate libpython
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
### TensorFlow **Name**: AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and additional python packages.
+**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and other python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.4
RUN HOROVOD_WITH_TENSORFLOW=1 \
'scipy>=1.5,<1.6' \ 'numpy>=1.10,<1.20' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'tensorboard==2.4.0' \ 'tensorflow-gpu==2.4.1' \ 'tensorflow-datasets==4.3.0' \
machine-learning Tutorial Deploy Models With Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-deploy-models-with-aml.md
- Title: "Image classification tutorial: Deploy models"-
-description: This tutorial shows how to use Azure Machine Learning to deploy an image classification model with scikit-learn in a Python Jupyter notebook.
------- Previously updated : 10/19/2021-
-#Customer intent: As a professional data scientist, I can deploy the model previously trained in tutorial1.
--
-# Tutorial: Deploy an image classification model in Azure Container Instances
--
-This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](tutorial-train-models-with-aml.md), you trained machine learning models and then registered a model in your workspace on the cloud. Now you're ready to deploy the model as a web service. A web service is an image, in this case a Docker image. It encapsulates the scoring logic and the model itself.
-
-In this part of the tutorial, you use Azure Machine Learning for the following tasks:
-
-> [!div class="checklist"]
-> * Set up your testing environment.
-> * Retrieve the model from your workspace.
-> * Deploy the model to Container Instances.
-> * Test the deployed model.
-
-Container Instances is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](how-to-deploy-and-where.md).
-
->[!NOTE]
-> Code in this article was tested with Azure Machine Learning SDK version 1.0.83.
-
-## Prerequisites
-
-To run the notebook, first complete the model training in [Tutorial (part 1): Train an image classification model](tutorial-train-models-with-aml.md). Then open the *img-classification-part2-deploy.ipynb* notebook in your cloned *tutorials/image-classification-mnist-data* folder.
-
-This tutorial is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use it on your own [local environment](how-to-configure-environment.md#local). Make sure you have installed `matplotlib` and `scikit-learn` in your environment.
-
-> [!Important]
-> The rest of this article contains the same content as you see in the notebook.
->
-> Switch to the Jupyter notebook now if you want to read along as you run the code.
-> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
-
-## <a name="start"></a>Set up the environment
-
-Start by setting up a testing environment.
-
-### Import packages
-
-Import the Python packages needed for this tutorial.
--
-```python
-%matplotlib inline
-import numpy as np
-import matplotlib.pyplot as plt
-
-import azureml.core
-
-# Display the core SDK version number
-print("Azure ML SDK Version: ", azureml.core.VERSION)
-```
-
-## Deploy as web service
-
-Deploy the model as a web service hosted in ACI.
-
-To build the correct environment for ACI, provide the following:
-* A scoring script to show how to use the model
-* A configuration file to build the ACI
-* The model you trained before
-
-### Create scoring script
-
-Create the scoring script, called score.py, used by the web service call to show how to use the model.
-
-You must include two required functions into the scoring script:
-* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started.
-
-* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.
-
-```python
-%%writefile score.py
-import json
-import numpy as np
-import os
-import pickle
-import joblib
-
-def init():
- global model
- # AZUREML_MODEL_DIR is an environment variable created during deployment.
- # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
- # For multiple models, it points to the folder containing all deployed models (./azureml-models)
- model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
- model = joblib.load(model_path)
-
-def run(raw_data):
- data = np.array(json.loads(raw_data)['data'])
- # make prediction
- y_hat = model.predict(data)
- # you can return any data type as long as it is JSON-serializable
- return y_hat.tolist()
-```
-
-### Create configuration file
-
-Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you would have to recreate the image and redeploy the service.
--
-```python
-from azureml.core.webservice import AciWebservice
-
-aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
- memory_gb=1,
- tags={"data": "MNIST", "method" : "sklearn"},
- description='Predict MNIST with sklearn')
-```
-
-### Deploy in ACI
-Estimated time to complete: **about 2-5 minutes**
-
-Configure the image and deploy. The following code goes through these steps:
-
-1. Create environment object containing dependencies needed by the model using the environment (`tutorial-env`) saved during training.
-1. Create inference configuration necessary to deploy the model as a web service using:
- * The scoring file (`score.py`)
- * environment object created in previous step
-1. Deploy the model to the ACI container.
-1. Get the web service HTTP endpoint.
--
-```python
-%%time
-import uuid
-from azureml.core.webservice import Webservice
-from azureml.core.model import InferenceConfig
-from azureml.core.environment import Environment
-from azureml.core import Workspace
-from azureml.core.model import Model
-
-ws = Workspace.from_config()
-model = Model(ws, 'sklearn_mnist')
--
-myenv = Environment.get(workspace=ws, name="tutorial-env", version="1")
-inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
-
-service_name = 'sklearn-mnist-svc-' + str(uuid.uuid4())[:4]
-service = Model.deploy(workspace=ws,
- name=service_name,
- models=[model],
- inference_config=inference_config,
- deployment_config=aciconfig)
-
-service.wait_for_deployment(show_output=True)
-```
-
-Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
--
-```python
-print(service.scoring_uri)
-```
-
-## Test the model
--
-### Download test data
-Download the test data to the **./data/** directory
--
-```python
-import os
-from azureml.core import Dataset
-from azureml.opendatasets import MNIST
-
-data_folder = os.path.join(os.getcwd(), 'data')
-os.makedirs(data_folder, exist_ok=True)
-
-mnist_file_dataset = MNIST.get_file_dataset()
-mnist_file_dataset.download(data_folder, overwrite=True)
-```
-
-### Load test data
-
-Load the test data from the **./data/** directory created during the training tutorial.
--
-```python
-from utils import load_data
-import os
-import glob
-
-data_folder = os.path.join(os.getcwd(), 'data')
-# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
-X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
-y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
-```
-
-### Predict test data
-
-Feed the test dataset to the model to get predictions.
--
-The following code goes through these steps:
-1. Send the data as a JSON array to the web service hosted in ACI.
-
-1. Use the SDK's `run` API to invoke the service. You can also make raw calls using any HTTP tool such as curl.
--
-```python
-import json
-test = json.dumps({"data": X_test.tolist()})
-test = bytes(test, encoding='utf8')
-y_hat = service.run(input_data=test)
-```
-
-### Examine the confusion matrix
-
-Generate a confusion matrix to see how many samples from the test set are classified correctly. Notice the mis-classified value for the incorrect predictions.
--
-```python
-from sklearn.metrics import confusion_matrix
-
-conf_mx = confusion_matrix(y_test, y_hat)
-print(conf_mx)
-print('Overall accuracy:', np.average(y_hat == y_test))
-```
-
-The output shows the confusion matrix:
-
-```output
-[[ 960 0 1 2 1 5 6 3 1 1]
- [ 0 1112 3 1 0 1 5 1 12 0]
- [ 9 8 920 20 10 4 10 11 37 3]
- [ 4 0 17 921 2 21 4 12 20 9]
- [ 1 2 5 3 915 0 10 2 6 38]
- [ 10 2 0 41 10 770 17 7 28 7]
- [ 9 3 7 2 6 20 907 1 3 0]
- [ 2 7 22 5 8 1 1 950 5 27]
- [ 10 15 5 21 15 27 7 11 851 12]
- [ 7 8 2 13 32 13 0 24 12 898]]
-Overall accuracy: 0.9204
-```
-
-Use `matplotlib` to display the confusion matrix as a graph. In this graph, the X axis represents the actual values, and the Y axis represents the predicted values. The color in each grid represents the error rate. The lighter the color, the higher the error rate is. For example, many 5's are mis-classified as 3's. So you see a bright grid at (5,3).
-
-```python
-# normalize the diagonal cells so that they don't overpower the rest of the cells when visualized
-row_sums = conf_mx.sum(axis=1, keepdims=True)
-norm_conf_mx = conf_mx / row_sums
-np.fill_diagonal(norm_conf_mx, 0)
-
-fig = plt.figure(figsize=(8,5))
-ax = fig.add_subplot(111)
-cax = ax.matshow(norm_conf_mx, cmap=plt.cm.bone)
-ticks = np.arange(0, 10, 1)
-ax.set_xticks(ticks)
-ax.set_yticks(ticks)
-ax.set_xticklabels(ticks)
-ax.set_yticklabels(ticks)
-fig.colorbar(cax)
-plt.ylabel('true labels', fontsize=14)
-plt.xlabel('predicted values', fontsize=14)
-plt.savefig('conf.png')
-plt.show()
-```
-
-![Chart showing confusion matrix](./media/tutorial-deploy-models-with-aml/confusion.png)
--
-## Show predictions
-
-Test the deployed model with a random sample of 30 images from the test data.
--
-1. Print the returned predictions and plot them along with the input images. Red font and inverse image (white on black) is used to highlight the misclassified samples.
-
- Since the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample.
---
-```python
-import json
-
-# find 30 random samples from test set
-n = 30
-sample_indices = np.random.permutation(X_test.shape[0])[0:n]
-
-test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
-test_samples = bytes(test_samples, encoding='utf8')
-
-# predict using the deployed model
-result = service.run(input_data=test_samples)
-
-# compare actual value vs. the predicted values:
-i = 0
-plt.figure(figsize = (20, 1))
-
-for s in sample_indices:
- plt.subplot(1, n, i + 1)
- plt.axhline('')
- plt.axvline('')
-
- # use different color for misclassified sample
- font_color = 'red' if y_test[s] != result[i] else 'black'
- clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
-
- plt.text(x=10, y =-10, s=result[i], fontsize=18, color=font_color)
- plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
-
- i = i + 1
-plt.show()
-```
-
-You can also send raw HTTP request to test the web service.
--
-```python
-import requests
-
-# send a random row from the test set to score
-random_index = np.random.randint(0, len(X_test)-1)
-input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
-
-headers = {'Content-Type':'application/json'}
-
-# for AKS deployment you'd need to the service key in the header as well
-# api_key = service.get_key()
-# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
-
-resp = requests.post(service.scoring_uri, input_data, headers=headers)
-
-print("POST to url", service.scoring_uri)
-#print("input data:", input_data)
-print("label:", y_test[random_index])
-print("prediction:", resp.text)
-```
-
-## Clean up resources
-
-To keep the resource group and workspace for other tutorials and exploration, you can delete only the ACI deployment using this API call:
--
-```python
-service.delete()
-```
---
-## Next steps
-
-+ Learn about all of the [deployment options for Azure Machine Learning](how-to-deploy-and-where.md).
-+ Learn how to [create clients for the web service](how-to-consume-web-service.md).
-+ [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously.
-+ Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).
-+ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-notebook.md
+
+ Title: "Tutorial: Train and deploy an example in Jupyter Notebook"
+
+description: Use Azure Machine Learning to train and deploy an image classification model with scikit-learn in a cloud-based Python Jupyter Notebook.
++++++ Last updated : 01/05/2022+
+#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial: Train and deploy an image classification model with an example Jupyter Notebook
+
+In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data.
+
+This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
+
+Learn how to take the following actions:
+
+> [!div class="checklist"]
+> * Download a dataset and look at the data.
+> * Train an image classification model and log metrics using MLflow.
+> * Deploy the model to do real-time inference.
++
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
+ * Create a workspace.
+ * Create a cloud-based compute instance to use for your development environment.
+
+## <a name="azure"></a>Run a notebook from your workspace
+
+Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](how-to-configure-environment.md#local) if you prefer to have control over your environment, packages, and dependencies.
+
+ Follow along with this video or use the detailed steps to clone and run the tutorial notebook from your workspace.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4mTUr]
+
+> [!NOTE]
+> The video helps you understand the process, but shows opening a different file. For this tutorial, once you've cloned the **tutorials** folder, use [instructions below to open the cloned notebook](#open).
+
+## <a name="clone"></a> Clone a notebook folder
+
+You complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools for data science practitioners of all skill levels.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+
+1. Select your subscription and the workspace you created.
+
+1. On the left, select **Notebooks**.
+
+1. At the top, select the **Samples** tab.
+
+1. Open the **Python** folder.
+
+1. Open the folder with a version number on it. This number represents the current release for the Python SDK.
+
+1. Select the **...** button at the right of the **tutorials** folder, and then select **Clone**.
+
+ :::image type="content" source="media/tutorial-1st-experiment-sdk-setup/clone-tutorials.png" alt-text="Screenshot that shows the Clone tutorials folder.":::
+
+1. A list of folders shows each user who accesses the workspace. Select your folder to clone the **tutorials** folder there.
+
+## <a name="open"></a> Open the cloned notebook
+
+1. Open the **tutorials** folder that was cloned into your **User files** section.
+
+ > [!IMPORTANT]
+ > You can view notebooks in the **samples** folder but you can't run a notebook from there. To run a notebook, make sure you open the cloned version of the notebook in the **User Files** section.
+
+1. Select the **quickstart-azureml-in-10mins.ipynb** file from your **compute-instance-quickstarts/quickstart-azureml-in-10mins** folder.
+
+ :::image type="content" source="media/tutorial-train-deploy-notebook/expand-folder.png" alt-text="Screenshot shows the Open tutorials folder.":::
+
+1. On the top bar, select the compute instance you created during the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to use for running the notebook.
+
+This tutorial and accompanying **utils.py** file are also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use it on your own [local environment](how-to-configure-environment.md#local). If you aren't using the compute instance, run `pip install azureml-sdk[notebooks] azureml-opendatasets matplotlib` to install dependencies for this tutorial.
+
+> [!Important]
+> The rest of this article contains the same content as you see in the notebook.
+>
+> Switch to the Jupyter Notebook now if you want to run the code while you read along.
+> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
+
+## Import data
+
+Before you train a model, you need to understand the data you're using to train it. In this section, learn how to:
+
+* Download the MNIST dataset
+* Display some sample images
+
+You'll use Azure Open Datasets to get the raw MNIST data files. Azure Open Datasets are curated public datasets that you can use to add scenario-specific features to machine learning solutions for better models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
++
+```python
+import os
+from azureml.opendatasets import MNIST
+
+data_folder = os.path.join(os.getcwd(), "/tmp/qs_data")
+os.makedirs(data_folder, exist_ok=True)
+
+mnist_file_dataset = MNIST.get_file_dataset()
+mnist_file_dataset.download(data_folder, overwrite=True)
+```
+
+### Take a look at the data
+
+Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them.
+
+Note this step requires a `load_data` function that's included in a `utils.py` file. This file is placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
++
+```python
+from utils import load_data
+import matplotlib.pyplot as plt
+import numpy as np
+import glob
++
+# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
+X_train = (
+ load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True
+ )[0],
+ False,
+ )
+ / 255.0
+)
+X_test = (
+ load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/t10k-images-idx3-ubyte.gz"), recursive=True
+ )[0],
+ False,
+ )
+ / 255.0
+)
+y_train = load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True
+ )[0],
+ True,
+).reshape(-1)
+y_test = load_data(
+ glob.glob(
+ os.path.join(data_folder, "**/t10k-labels-idx1-ubyte.gz"), recursive=True
+ )[0],
+ True,
+).reshape(-1)
++
+# now let's show some randomly chosen images from the training set.
+count = 0
+sample_size = 30
+plt.figure(figsize=(16, 6))
+for i in np.random.permutation(X_train.shape[0])[:sample_size]:
+ count = count + 1
+ plt.subplot(1, sample_size, count)
+ plt.axhline("")
+ plt.axvline("")
+ plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
+ plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
+plt.show()
+```
+The code above displays a random set of images with their labels, similar to this:
++
+## Train model and log metrics with MLflow
+
+You'll train the model using the code below. Note that you are using MLflow autologging to track metrics and log model artifacts.
+
+You'll be using the [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) classifier from the [scikit-learn framework](https://scikit-learn.org/) to classify the data.
+
+> [!NOTE]
+> The model training takes approximately 2 minutes to complete.
++
+```python
+# create the model
+import mlflow
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from azureml.core import Workspace
+
+# connect to your workspace
+ws = Workspace.from_config()
+
+# create experiment and start logging to a new run in the experiment
+experiment_name = "azure-ml-in10-mins-tutorial"
+
+# set up MLflow to track the metrics
+mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+mlflow.set_experiment(experiment_name)
+mlflow.autolog()
+
+# set up the Logistic regression model
+reg = 0.5
+clf = LogisticRegression(
+ C=1.0 / reg, solver="liblinear", multi_class="auto", random_state=42
+)
+
+# train the model
+with mlflow.start_run() as run:
+ clf.fit(X_train, y_train)
+```
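+
+As an optional check (an illustrative addition, not part of the original notebook), you can read back the metrics that autologging recorded for this run:
+
+```python
+# read back the autologged metrics for the finished run
+finished_run = mlflow.get_run(run.info.run_id)
+print(finished_run.data.metrics)
+```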
+
+## View experiment
+
+In the left-hand menu in Azure Machine Learning Studio, select __Experiments__ and then select your experiment (__azure-ml-in10-mins-tutorial__). An experiment is a grouping of many runs from a specified script or piece of code. Information for the run is stored under that experiment. If the name doesn't exist when you submit an experiment, a new experiment is automatically created. When you select your run, you'll see various tabs containing metrics, logs, explanations, and so on.
+
+## Version control your models with the model registry
+
+You can use model registration to store and version your models in your workspace. Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. The code below registers and versions the model you trained above. Once you have executed the code cell below you will be able to see the model in the registry by selecting __Models__ in the left-hand menu in Azure Machine Learning Studio.
+
+```python
+# register the model
+model_uri = "runs:/{}/model".format(run.info.run_id)
+model = mlflow.register_model(model_uri, "sklearn_mnist_model")
+```
+
+## Deploy the model for real-time inference
+
+In this section you learn how to deploy a model so that an application can consume (inference) the model over REST.
+
+### Create deployment configuration
+
+The code cell gets a _curated environment_, which specifies all the dependencies required to host the model (for example, packages like scikit-learn). Also, you create a _deployment configuration_, which specifies the amount of compute required to host the model. In this case, the compute will have 1 CPU and 1 GB of memory.
++
+```python
+# create environment for the deploy
+from azureml.core.environment import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core.webservice import AciWebservice
+
+# get a curated environment
+env = Environment.get(
+ workspace=ws,
+ name="AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference",
+ version=1
+)
+env.inferencing_stack_version='latest'
+
+# create deployment config i.e. compute resources
+aciconfig = AciWebservice.deploy_configuration(
+ cpu_cores=1,
+ memory_gb=1,
+ tags={"data": "MNIST", "method": "sklearn"},
+ description="Predict MNIST with sklearn",
+)
+```
+
+### Deploy model
+
+This next code cell deploys the model to Azure Container Instance.
+
+> [!NOTE]
+> The deployment takes approximately 3 minutes to complete.
++
+```python
+%%time
+import uuid
+from azureml.core.model import InferenceConfig
+from azureml.core.environment import Environment
+from azureml.core.model import Model
+
+# get the registered model
+model = Model(ws, "sklearn_mnist_model")
+
+# create an inference config i.e. the scoring script and environment
+inference_config = InferenceConfig(entry_script="score.py", environment=env)
+
+# deploy the service
+service_name = "sklearn-mnist-svc-" + str(uuid.uuid4())[:4]
+service = Model.deploy(
+ workspace=ws,
+ name=service_name,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aciconfig,
+)
+
+service.wait_for_deployment(show_output=True)
+```
+
+The scoring script file referenced in the code above can be found in the same folder as this notebook, and has two functions (a minimal sketch follows this list):
+
+1. An `init` function that executes once when the service starts - in this function you normally get the model from the registry and set global variables
+1. A `run(data)` function that executes each time a call is made to the service. In this function, you normally format the input data, run a prediction, and output the predicted result.
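+
+The notebook's own scoring script isn't reproduced here; the following is only a minimal sketch of what those two functions can look like, assuming the registered model is an MLflow-format scikit-learn model (the exact artifact layout under `AZUREML_MODEL_DIR` may differ):
+
+```python
+# score.py - minimal sketch (assumes an MLflow-format scikit-learn model)
+import json
+import os
+
+import mlflow.sklearn
+import numpy as np
+
+model = None
+
+def init():
+    global model
+    # AZUREML_MODEL_DIR points at the folder containing the deployed model
+    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model")
+    model = mlflow.sklearn.load_model(model_path)
+
+def run(raw_data):
+    # expects {"data": [[...pixel values...], ...]} as in the test cell below
+    data = np.array(json.loads(raw_data)["data"])
+    predictions = model.predict(data)
+    return predictions.tolist()  # must be JSON-serializable
+```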
+
+### View endpoint
+
+Once the model has been successfully deployed, you can view the endpoint by navigating to __Endpoints__ in the left-hand menu in Azure Machine Learning Studio. There you can see the endpoint's state (healthy or unhealthy), its logs, and the __Consume__ tab, which shows how applications can consume the model.
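+
+You can also check the service from the SDK (an optional addition, not in the original notebook):
+
+```python
+# optional: inspect the deployed service's state and container logs
+print(service.state)       # for example, "Healthy"
+print(service.get_logs())  # useful when debugging an unhealthy deployment
+```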
+
+## Test the model service
+
+You can test the model by sending a raw HTTP request to test the web service.
++
+```python
+# send raw HTTP request to test the web service.
+import requests
+
+# send a random row from the test set to score
+random_index = np.random.randint(0, len(X_test) - 1)
+input_data = '{"data": [' + str(list(X_test[random_index])) + "]}"
+
+headers = {"Content-Type": "application/json"}
+
+resp = requests.post(service.scoring_uri, input_data, headers=headers)
+
+print("POST to url", service.scoring_uri)
+print("label:", y_test[random_index])
+print("prediction:", resp.text)
+```
+
+## Clean up resources
+
+If you're not going to continue to use this model, delete the model service using:
+
+```python
+# if you want to keep workspace and only delete endpoint (it will incur cost while running)
+service.delete()
+```
+
+If you want to control cost further, stop the compute instance by selecting the "Stop compute" button next to the **Compute** dropdown. Then start the compute instance again the next time you need it.
+
+### Delete everything
+
+Use these steps to delete your Azure Machine Learning workspace and all compute resources.
+++
+## Next steps
+
++ Learn about all of the [deployment options for Azure Machine Learning](how-to-deploy-and-where.md).
++ Learn how to [create clients for the web service](how-to-consume-web-service.md).
++ [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously.
++ Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).
++ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
machine-learning Tutorial Train Models With Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-models-with-aml.md
- Title: "Tutorial: Train an example Jupyter Notebook"-
-description: Use Azure Machine Learning to train an image classification model with scikit-learn in a cloud-based Python Jupyter Notebook. This tutorial is part one of two.
------- Previously updated : 10/21/2021-
-#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
--
-# Tutorial: Train an image classification model with an example Jupyter Notebook
-
-In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data. This tutorial is **part one of a two-part tutorial series**.
-
-This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
-
-Learn how to take the following actions:
-
-> [!div class="checklist"]
-> * Set up your development environment.
-> * Access and examine the data.
-> * Train a simple logistic regression model on a remote cluster.
-> * Review training results and register the best model.
-
-You learn how to select a model and deploy it in [part two of this tutorial](tutorial-deploy-models-with-aml.md).
-
-If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-
-> [!NOTE]
-> Code in this article was tested with [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro) version 1.13.0.
-
-## Prerequisites
-
-* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
- * Create a workspace.
- * Create a cloud-based compute instance to use for your development environment.
- * Create a cloud-based compute cluster to use for training your model.
-
-## <a name="azure"></a>Run a notebook from your workspace
-
-Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](how-to-configure-environment.md#local) if you prefer to have control over your environment, packages, and dependencies.
-
- Follow along with this video or use the detailed steps to clone and run the tutorial notebook from your workspace.
-
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4mTUr]
-
-> [!NOTE]
-> The video helps you understand the process, but shows opening a different file. For this tutorial, once you've cloned the **tutorials** folder, open the **img-classification-part1-training.ipynb** file from your **tutorials/image-classification-mnist-data** folder.
-
-### <a name="clone"></a> Clone a notebook folder
-
-You complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels.
-
-1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
-
-1. Select your subscription and the workspace you created.
-
-1. On the left, select **Notebooks**.
-
-1. At the top, select the **Samples** tab.
-
-1. Open the **Python** folder.
-
-1. Open the folder with a version number on it. This number represents the current release for the Python SDK.
-
-1. Select the **...** button at the right of the **tutorials** folder, and then select **Clone**.
-
- :::image type="content" source="media/tutorial-1st-experiment-sdk-setup/clone-tutorials.png" alt-text="Screenshot that shows the Clone tutorials folder.":::
-
-1. A list of folders shows each user who accesses the workspace. Select your folder to clone the **tutorials** folder there.
-
-### <a name="open"></a> Open the cloned notebook
-
-1. Open the **tutorials** folder that was cloned into your **User files** section.
-
- > [!IMPORTANT]
- > You can view notebooks in the **samples** folder but you can't run a notebook from there. To run a notebook, make sure you open the cloned version of the notebook in the **User Files** section.
-
-1. Select the **img-classification-part1-training.ipynb** file in your **tutorials/image-classification-mnist-data** folder.
-
- :::image type="content" source="media/tutorial-1st-experiment-sdk-setup/expand-user-folder.png" alt-text="Screenshot that shows the Open tutorials folder.":::
-
-1. On the top bar, select your compute instance to use to run the notebook.
--
-The tutorial and accompanying **utils.py** file is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use it on your own [local environment](how-to-configure-environment.md#local). If you aren't using the compute instance, run `pip install azureml-sdk[notebooks] azureml-opendatasets matplotlib` to install dependencies for this tutorial.
-
-> [!Important]
-> The rest of this article contains the same content as you see in the notebook.
->
-> Switch to the Jupyter Notebook now if you want to read along as you run the code.
-> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
-
-## <a name="start"></a>Set up your development environment
-
-All the setup for your development work can be accomplished in a Python notebook. Setup includes the following actions:
-
-* Import Python packages.
-* Connect to a workspace, so that your local computer can communicate with remote resources.
-* Create an experiment to track all your runs.
-* Create a remote compute target to use for training.
-
-### Import packages
-
-Import Python packages you need in this session. Also display the Azure Machine Learning SDK version:
-
-```python
-%matplotlib inline
-import numpy as np
-import matplotlib.pyplot as plt
-
-import azureml.core
-from azureml.core import Workspace
-
-# check core SDK version number
-print("Azure ML SDK Version: ", azureml.core.VERSION)
-```
-
-### Connect to a workspace
-
-Create a workspace object from the existing workspace. `Workspace.from_config()` reads the file **config.json** and loads the details into an object named `ws`. The compute instance has a copy of this file saved in its root directory. If you run the code elsewhere, you'll need to [create the file](how-to-configure-environment.md#workspace).
-
-```python
-# load workspace configuration from the config.json file in the current folder.
-ws = Workspace.from_config()
-print(ws.name, ws.location, ws.resource_group, sep='\t')
-```
-
->[!NOTE]
-> You may be asked to authenticate to your workspace the first time you run the following code. Follow the on-screen instructions.
-
-### Create an experiment
-
-Create an experiment to track the runs in your workspace. A workspace can have multiple experiments:
-
-```python
-from azureml.core import Experiment
-experiment_name = 'Tutorial-sklearn-mnist'
-
-exp = Experiment(workspace=ws, name=experiment_name)
-```
-
-### Create or attach an existing compute target
-
-By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. You will submit Python code to run on this VM later in the tutorial.
-
-The code below creates the compute clusters for you if they don't already exist in your workspace. It sets up a cluster that will scale down to 0 when not in use, and can scale up to a maximum of 4 nodes.
-
- **Creation of the compute target takes about five minutes.** If the compute resource is already in the workspace, the code uses it and skips the creation process.
-
-> [!TIP]
-> If you created a compute cluster in the quickstart, make sure `compute_name` in the code below uses the same name.
-
-```python
-from azureml.core.compute import AmlCompute
-from azureml.core.compute import ComputeTarget
-import os
-
-# choose a name for your cluster
-compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "cpu-cluster")
-compute_min_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MIN_NODES", 0)
-compute_max_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MAX_NODES", 4)
-
-# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
-vm_size = os.environ.get("AML_COMPUTE_CLUSTER_SKU", "STANDARD_D2_V2")
--
-if compute_name in ws.compute_targets:
- compute_target = ws.compute_targets[compute_name]
- if compute_target and type(compute_target) is AmlCompute:
- print('found compute target. just use it. ' + compute_name)
-else:
- print('creating a new compute target...')
- provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size,
- min_nodes=compute_min_nodes,
- max_nodes=compute_max_nodes)
-
- # create the cluster
- compute_target = ComputeTarget.create(
- ws, compute_name, provisioning_config)
-
- # can poll for a minimum number of nodes and for a specific timeout.
- # if no min node count is provided it will use the scale settings for the cluster
- compute_target.wait_for_completion(
- show_output=True, min_node_count=None, timeout_in_minutes=20)
-
- # For a more detailed view of current AmlCompute status, use get_status()
- print(compute_target.get_status().serialize())
-```
-
-You now have the necessary packages and compute resources to train a model in the cloud.
-
-## Explore data
-
-Before you train a model, you need to understand the data that you use to train it. In this section you learn how to:
-
-* Download the MNIST dataset.
-* Display some sample images.
-
-### Download the MNIST dataset
-
-Use Azure Open Datasets to get the raw MNIST data files. [Azure Open Datasets](../open-datasets/overview-what-are-open-datasets.md) are curated public datasets that you can use to add scenario-specific features to machine learning solutions for more accurate models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.
-
-This code retrieves the data as a `FileDataset` object, which is a subclass of `Dataset`. A `FileDataset` references single or multiple files of any format in your datastores or public urls. The class provides you with the ability to download or mount the files to your compute by creating a reference to the data source location. Additionally, you register the Dataset to your workspace for easy retrieval during training.
-
-Follow the [how-to](how-to-create-register-datasets.md) to learn more about Datasets and their usage in the SDK.
-
-```python
-from azureml.core import Dataset
-from azureml.opendatasets import MNIST
-
-data_folder = os.path.join(os.getcwd(), 'data')
-os.makedirs(data_folder, exist_ok=True)
-
-mnist_file_dataset = MNIST.get_file_dataset()
-mnist_file_dataset.download(data_folder, overwrite=True)
-
-mnist_file_dataset = mnist_file_dataset.register(workspace=ws,
- name='mnist_opendataset',
- description='training and test dataset',
- create_new_version=True)
-```
-
-### Display some sample images
-
-Load the compressed files into `numpy` arrays. Then use `matplotlib` to plot 30 random images from the dataset with their labels above them. This step requires a `load_data` function that's included in an `utils.py` file. This file is included in the sample folder. Make sure it's placed in the same folder as this notebook. The `load_data` function simply parses the compressed files into numpy arrays.
-
-```python
-# make sure utils.py is in the same directory as this code
-from utils import load_data
-import glob
--
-# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the model converge faster.
-X_train = load_data(glob.glob(os.path.join(data_folder,"**/train-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
-X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
-y_train = load_data(glob.glob(os.path.join(data_folder,"**/train-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
-y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
--
-# now let's show some randomly chosen images from the training set.
-count = 0
-sample_size = 30
-plt.figure(figsize=(16, 6))
-for i in np.random.permutation(X_train.shape[0])[:sample_size]:
- count = count + 1
- plt.subplot(1, sample_size, count)
- plt.axhline('')
- plt.axvline('')
- plt.text(x=10, y=-10, s=y_train[i], fontsize=18)
- plt.imshow(X_train[i].reshape(28, 28), cmap=plt.cm.Greys)
-plt.show()
-```
-
-A random sample of images displays:
-
-![Random sample of images](./media/tutorial-train-models-with-aml/digits.png)
-
-Now you have an idea of what these images look like and the expected prediction outcome.
-
-## Train on a remote cluster
-
-For this task, you submit the job to run on the remote training cluster you set up earlier. To submit a job you:
-* Create a directory
-* Create a training script
-* Create a script run configuration
-* Submit the job
-
-### Create a directory
-
-Create a directory to deliver the necessary code from your computer to the remote resource.
-
-```python
-import os
-script_folder = os.path.join(os.getcwd(), "sklearn-mnist")
-os.makedirs(script_folder, exist_ok=True)
-```
-
-### Create a training script
-
-To submit the job to the cluster, first create a training script. Run the following code to create the training script called `train.py` in the directory you just created.
-
-```python
-%%writefile $script_folder/train.py
-
-import argparse
-import os
-import numpy as np
-import glob
-
-from sklearn.linear_model import LogisticRegression
-import joblib
-
-from azureml.core import Run
-from utils import load_data
-
-# let user feed in 2 parameters, the dataset to mount or download, and the regularization rate of the logistic regression model
-parser = argparse.ArgumentParser()
-parser.add_argument('--data-folder', type=str, dest='data_folder', help='data folder mounting point')
-parser.add_argument('--regularization', type=float, dest='reg', default=0.01, help='regularization rate')
-args = parser.parse_args()
-
-data_folder = args.data_folder
-print('Data folder:', data_folder)
-
-# load train and test set into numpy arrays
-# note we scale the pixel intensity values to 0-1 (by dividing it with 255.0) so the model can converge faster.
-X_train = load_data(glob.glob(os.path.join(data_folder, '**/train-images-idx3-ubyte.gz'), recursive=True)[0], False) / 255.0
-X_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-images-idx3-ubyte.gz'), recursive=True)[0], False) / 255.0
-y_train = load_data(glob.glob(os.path.join(data_folder, '**/train-labels-idx1-ubyte.gz'), recursive=True)[0], True).reshape(-1)
-y_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-labels-idx1-ubyte.gz'), recursive=True)[0], True).reshape(-1)
-
-print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep = '\n')
-
-# get hold of the current run
-run = Run.get_context()
-
-print('Train a logistic regression model with regularization rate of', args.reg)
-clf = LogisticRegression(C=1.0/args.reg, solver="liblinear", multi_class="auto", random_state=42)
-clf.fit(X_train, y_train)
-
-print('Predict the test set')
-y_hat = clf.predict(X_test)
-
-# calculate accuracy on the prediction
-acc = np.average(y_hat == y_test)
-print('Accuracy is', acc)
-
-run.log('regularization rate', np.float(args.reg))
-run.log('accuracy', np.float(acc))
-
-os.makedirs('outputs', exist_ok=True)
-# note file saved in the outputs folder is automatically uploaded into experiment record
-joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')
-```
-
-Notice how the script gets data and saves models:
--- The training script reads an argument to find the directory that contains the data. When you submit the job later, you point to the datastore for this argument:-
- `parser.add_argument('--data-folder', type=str, dest='data_folder', help='data directory mounting point')`
--- The training script saves your model into a directory named **outputs**. Anything written in this directory is automatically uploaded into your workspace. You access your model from this directory later in the tutorial. `joblib.dump(value=clf, filename='outputs/sklearn_mnist_model.pkl')`--- The training script requires the file `utils.py` to load the dataset correctly. The following code copies `utils.py` into `script_folder` so that the file can be accessed along with the training script on the remote resource.-
- ```python
- import shutil
- shutil.copy('utils.py', script_folder)
- ```
-
-### Configure the training job
-
-Create a [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object to specify the configuration details of your training job, including your training script, environment to use, and the compute target to run on. Configure the ScriptRunConfig by specifying:
-
-* The directory that contains your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
-* The compute target. In this case, you use the Azure Machine Learning compute cluster you created.
-* The training script name, **train.py**.
-* An environment that contains the libraries needed to run the script.
-* Arguments required from the training script.
-
-In this tutorial, this target is AmlCompute. All files in the script folder are uploaded into the cluster nodes for run. The **--data_folder** is set to use the dataset.
-
-First, create the environment that contains: the scikit-learn library, azureml-dataset-runtime required for accessing the dataset, and azureml-defaults which contains the dependencies for logging metrics. The azureml-defaults also contains the dependencies required for deploying the model as a web service later in the part 2 of the tutorial.
-
-Once the environment is defined, register it with the Workspace to re-use it in part 2 of the tutorial.
-
-```python
-from azureml.core.environment import Environment
-from azureml.core.conda_dependencies import CondaDependencies
-
-# to install required packages
-env = Environment('tutorial-env')
-cd = CondaDependencies.create(pip_packages=['azureml-dataset-runtime[pandas,fuse]', 'azureml-defaults'], conda_packages=['scikit-learn==0.22.1'])
-
-env.python.conda_dependencies = cd
-
-# Register environment to re-use later
-env.register(workspace=ws)
-```
-
-Then, create the ScriptRunConfig by specifying the training script, compute target and environment.
-
-```python
-from azureml.core import ScriptRunConfig
-
-args = ['--data-folder', mnist_file_dataset.as_mount(), '--regularization', 0.5]
-
-src = ScriptRunConfig(source_directory=script_folder,
- script='train.py',
- arguments=args,
- compute_target=compute_target,
- environment=env)
-```
-
-### Submit the job to the cluster
-
-Run the experiment by submitting the ScriptRunConfig object:
-
-```python
-run = exp.submit(config=src)
-run
-```
-
-Because the call is asynchronous, it returns a **Preparing** or **Running** state as soon as the job is started.
-
-## Monitor a remote run
-
-In total, the first run takes **about 10 minutes**. But for subsequent runs, as long as the script dependencies don't change, the same image is reused. So the container startup time is much faster.
-
-What happens while you wait:
--- **Image creation**: A Docker image is created that matches the Python environment specified by the Azure ML environment. The image is uploaded to the workspace. Image creation and uploading takes **about five minutes**.-
- This stage happens once for each Python environment because the container is cached for subsequent runs. During image creation, logs are streamed to the run history. You can monitor the image creation progress by using these logs.
--- **Scaling**: If the remote cluster requires more nodes to do the run than currently available, additional nodes are added automatically. Scaling typically takes **about five minutes.**--- **Running**: In this stage, the necessary scripts and files are sent to the compute target. Then datastores are mounted or copied. And then the **entry_script** is run. While the job is running, **stdout** and the **./logs** directory are streamed to the run history. You can monitor the run's progress by using these logs.--- **Post-processing**: The **./outputs** directory of the run is copied over to the run history in your workspace, so you can access these results.-
-You can check the progress of a running job in several ways. This tutorial uses a Jupyter widget and a `wait_for_completion` method.
-
-### Jupyter widget
-
-Watch the progress of the run with a [Jupyter widget](/python/api/azureml-widgets/azureml.widgets). Like the run submission, the widget is asynchronous and provides live updates every 10 to 15 seconds until the job finishes:
-
-```python
-from azureml.widgets import RunDetails
-RunDetails(run).show()
-```
-
-The widget will look like the following at the end of training:
-
-![Notebook widget](./media/tutorial-train-models-with-aml/widget.png)
-
-If you need to cancel a run, you can follow [these instructions](./how-to-track-monitor-analyze-runs.md).
-
-### Get log results upon completion
-
-Model training and monitoring happen in the background. Wait until the model has finished training before you run more code. Use `wait_for_completion` to show when the model training is finished:
-
-```python
-run.wait_for_completion(show_output=False) # specify True for a verbose log
-```
-
-### Display run results
-
-You now have a model trained on a remote cluster. Retrieve the accuracy of the model:
-
-```python
-print(run.get_metrics())
-```
-
-The output shows the remote model has accuracy of 0.9204:
-
-`{'regularization rate': 0.8, 'accuracy': 0.9204}`
-
-In the next tutorial, you explore this model in more detail.
-
-## Register model
-
-The last step in the training script wrote the file `outputs/sklearn_mnist_model.pkl` in a directory named `outputs` in the VM of the cluster where the job is run. `outputs` is a special directory in that all content in this directory is automatically uploaded to your workspace. This content appears in the run record in the experiment under your workspace. So the model file is now also available in your workspace.
-
-You can see files associated with that run:
-
-```python
-print(run.get_file_names())
-```
-
-Register the model in the workspace, so that you or other collaborators can later query, examine, and deploy this model:
-
-```python
-# register model
-model = run.register_model(model_name='sklearn_mnist',
- model_path='outputs/sklearn_mnist_model.pkl')
-print(model.name, model.id, model.version, sep='\t')
-```
-
-## Clean up resources
--
-You can also delete just the Azure Machine Learning Compute cluster. However, autoscale is turned on, and the cluster minimum is zero. So this particular resource won't incur additional compute charges when not in use:
-
-```python
-# Optionally, delete the Azure Machine Learning Compute cluster
-compute_target.delete()
-```
-
-## Next steps
-
-In this Azure Machine Learning tutorial, you used Python for the following tasks:
-
-> [!div class="checklist"]
-> * Set up your development environment.
-> * Access and examine the data.
-> * Train multiple models on a remote cluster using the popular scikit-learn machine learning library
-> * Review training details and register the best model.
-
-You're ready to deploy this registered model by using the instructions in the next part of the tutorial series:
-
-> [!div class="nextstepaction"]
-> [Tutorial 2 - Deploy models](tutorial-deploy-models-with-aml.md)
marketplace Azure Consumption Commitment Enrollment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-consumption-commitment-enrollment.md
An offer must meet the following requirements to be enrolled in the MACC program
## How to see if your offer is enrolled in the MACC program

1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
1. On the Home page, select the **Marketplace offers** tile.
marketplace Create Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-account.md
You can then assign the appropriate user roles and permissions to your users, so
>[!NOTE]
> You must have an **account admin** or a **global admin** role to sign in to Microsoft Partner Network.
-1. In the top-right, select **Settings** > **Account settings** > **Programs**.
+1. In the top-right, select **Settings** > **Account settings**. Then in the left menu, select **Programs**.
1. Under **Commercial Marketplace**, select **Get Started**.
You've now created a commercial marketplace account in Partner Center. Continue
When you register for the commercial marketplace program with the same account as your developer program, you can see all of your existing program enrollments in the left pane of Partner Center.

1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507) with your existing account.
-1. In the top-right, select **Settings** > **Account settings** > **Programs**.
+1. In the top-right, select **Settings** > **Account settings**. Then in the left menu, select **Programs**.
1. Under **Commercial Marketplace**, select **Get Started**, and enter your work email address.
1. Complete the details on the **Publisher profile** page.
1. Read the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement), and then select **Accept and continue** to complete your enrollment.
The publishing agreement and company profile information transition to your new
- [Add new publishers to the commercial marketplace](add-publishers.md) - [Manage your commercial marketplace account in Partner Center](manage-account.md)-- [What is the Microsoft commercial marketplace?](overview.md)
+- [What is the Microsoft commercial marketplace?](overview.md)
marketplace Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/manage-tenants.md
Any user with the Manager role in the Partner Center account will have the optio
To associate another Azure AD tenant with your Partner Center account:
-1. In the top-right of Partner Center, select **Settings** > **Account settings**.
+1. In the upper-right of Partner Center, select **Settings** > **Account settings**.
1. Under **Organization profile**, select **Tenants**. The current tenant associations are shown.
1. On the **Developer** tab, select **Associate**.
1. Enter your Azure AD credentials for the tenant that you want to associate.
If the association is successful, you will then be ready to add and manage accou
To create a brand new Azure AD tenant with your Partner Center account:
-1. In the top-right of Partner Center, select **Settings** > **Account settings**.
+1. In the upper-right of Partner Center, select **Settings** > **Account settings**.
1. Under **Organization profile**, select **Tenants**. The current tenant associations are shown.
1. On the Developer tab, select **Create**.
1. Enter the directory information for your new Azure AD:
For more information about creating new tenants inside your Azure portal, instea
To remove a tenant from your Partner Center account, find its name on the **Tenants** page (in **Account settings**), then select **Remove**. You'll be prompted to confirm that you want to remove the tenant. After you do so, no users in that tenant will be able to sign into the Partner Center account, and any permissions you have configured for those users will be removed.

> [!TIP]
-> You can't remove a tenant if you are currently signed into Partner Center using an account in the same tenant. To remove a tenant, you must sign into Partner Center as a **Manager** for another tenant that is associated with the account. If there is only one tenant associated with the account, that tenant can only be removed after signing in with the Microsoft account that opened the account.
+> You can't remove a tenant if you are currently signed into Partner Center using an account in the same tenant. To remove a tenant, you must sign into Partner Center as a **Manager** for another tenant that is associated with the account. If there is only one tenant associated with the account, that tenant can only be removed after signing in with the Microsoft account that opened the account.
media-services Encode Basic Encoding Python Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/encode-basic-encoding-python-quickstart.md
ms.devlang: python Previously updated : 7/2/2021 Last updated : 1/10/2022
This quickstart shows you how to do basic encoding with Python and Azure Media S
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Create a resource group to use with this quickstart.
-- Create a Media Services v3 account.
-- Get your storage account key.
-- Create a service principal and key.
+- [Create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups) to use with this quickstart.
+- [Create a Media Services v3 account](account-create-how-to.md).
+- [Get your storage account key](/azure/storage/common/storage-account-keys-manage#view-account-access-keys).
+- [Create a service principal and key](/azure/purview/create-service-principal-azure).
## Get the sample
Create a fork and clone the sample located in the [Python samples repository](ht
## Create the .env file
-Get the values from your account to create an *.env* file. That's right, save it with no name, just the extension. Use *sample.env* as a template then save the *.env* file to the BasicEncoder folder in your local clone.
+Get the values from your account to create an *.env* file. That is correct, save it with no name, just the extension. Use *sample.env* as a template then save the *.env* file to the BasicEncoder folder in your local clone.
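For illustration only, a hypothetical *.env* might look like the following. Copy the real variable names from *sample.env*; `AZURE_TENANT_ID`, `AZURE_CLIENT_ID`, and `AZURE_CLIENT_SECRET` are the variables the azure-identity `EnvironmentCredential` reads, and the values shown here are placeholders:

``` bash
# Placeholder values only; use sample.env for the authoritative variable names
AZURE_TENANT_ID="<your-tenant-id>"
AZURE_CLIENT_ID="<your-service-principal-app-id>"
AZURE_CLIENT_SECRET="<your-service-principal-secret>"
AZURE_SUBSCRIPTION_ID="<your-subscription-id>"
```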
## Use Python virtual environments
-For samples, we recommend that you always create and activate a Python virtual environment using the following steps:
-1. Open the sample folder in VSCode or other editor
-2. Create the virtual environment
+For samples, we recommend you always create and activate a Python virtual environment using the following steps:
+
+1. Open the sample folder in VSCode or other editor.
+2. Create a virtual environment.
   ``` bash
   # py -3 uses the global python interpreter. You can also use python -m venv .venv.
   py -3 -m venv .venv
   ```
- This command runs the Python venv module and creates a virtual environment in a folder named .venv.
+ This command runs the Python `venv` module and creates a virtual environment in a folder named *.venv*.
3. Activate the virtual environment:
For samples, we recommend that you always create and activate a Python virtual e
   .venv\scripts\activate
   ```
- A virtual environment is a folder within a project that isolates a copy of a specific Python interpreter. Once you activate that environment (which Visual Studio Code does automatically), running pip install installs a library into that environment only. When you then run your Python code, it runs in the environment's exact context with specific versions of every library. And when you run pip freeze, you get the exact list of the those libraries. (In many of the samples, you create a requirements.txt file for the libraries you need, then use pip install -r requirements.txt. A requirements file is generally needed when you deploy code to Azure.)
+ A virtual environment is a folder within a project that isolates a copy of a specific Python interpreter. Once you activate that environment (which Visual Studio Code does automatically), running `pip install` installs a library into that environment only. When you then run your Python code, it runs in the environment's exact context with specific versions of every library. And when you run `pip freeze`, you get the exact list of those libraries. (In many of the samples, you create a requirements.txt file for the libraries you need, then use `pip install -r requirements.txt`. A requirements file is usually needed when you deploy code to Azure.)
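As a minimal sketch of that workflow:

``` bash
# Record the environment's exact library versions
pip freeze > requirements.txt

# Recreate the same environment on another machine or in Azure
pip install -r requirements.txt
```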
## Set up
-Set up and [configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment)
+1. Set up and [configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).
-Install the azure-identity library for Python. This module is needed for Azure Active Directory authentication. See the details at [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables)
+2. Install the azure-identity library for Python. This module is needed for Azure Active Directory authentication. See the details at [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#environment-variables).
- ``` bash
- pip install azure-identity
- ```
+ ``` bash
+ pip install azure-identity
+ ```
-Install the Python SDK for [Azure Media Services](/python/api/overview/azure/media-services)
+3. Install the Python SDK for [Azure Media Services](/python/api/overview/azure/media-services).
-The Pypi page for the Media Services Python SDK with latest version details is located at - [azure-mgmt-media](https://pypi.org/project/azure-mgmt-media/)
+ The PyPI page for the Media Services Python SDK, with the latest version details, is located at [azure-mgmt-media](https://pypi.org/project/azure-mgmt-media/).
- ``` bash
- pip install azure-mgmt-media
- ```
+ ``` bash
+ pip install azure-mgmt-media
+ ```
-Install the [Azure Storage SDK for Python](https://pypi.org/project/azure-storage-blob/)
+4. Install the [Azure Storage SDK for Python](https://pypi.org/project/azure-storage-blob/).
- ``` bash
- pip install azure-storage-blob
- ```
+ ``` bash
+ pip install azure-storage-blob
+ ```
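With the three packages installed, client creation can be sketched as follows. This is an illustration rather than the sample's exact code; `AZURE_SUBSCRIPTION_ID` and the storage account URL are assumed values:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.storage.blob import BlobServiceClient

# Reads the service principal values from the environment variables set earlier
credential = DefaultAzureCredential()

# Management client for Media Services; the subscription ID is an assumed variable
client = AzureMediaServices(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# Blob client for uploading source files; replace the account URL with your own
blob_service = BlobServiceClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    credential=credential,
)
```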
-You can optionally install ALL of the requirements for a given samples by using the "requirements.txt" file in the samples folder
+You can optionally install ALL of the requirements for a given sample by using the "requirements.txt" file in the samples folder.
``` bash
pip install -r requirements.txt
```
You can optionally install ALL of the requirements for a given samples by using
The code below is thoroughly commented. Use the whole script or use parts of it for your own script.
-In this sample, a random number is generated for naming things so you can identify them as a group that was created together when you ran the script. The random number is optional, and can be removed when you're done testing the script.
+In this sample, a random number is generated for naming things so you can identify them as a group that was created together when you ran the script. The random number is optional, and can be removed when you're done testing the script.
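As a sketch of that naming pattern (the variable names here are illustrative, not the sample's actual code):

```python
import random

# One shared random suffix groups every resource created by this run of the script
uniqueness = random.randint(0, 9999)
input_asset_name = f"inputAsset-{uniqueness}"
output_asset_name = f"outputAsset-{uniqueness}"
job_name = f"encodingJob-{uniqueness}"
```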
We're not using the SAS URL for the input asset in this sample.
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a high-level features and capabilities comparisons
| TimescaleDB, orafce, plv8 | Yes | No |
| PgCron, lo, pglogical | No | Yes |
| pgAudit | Preview | Yes |
+| **Security** | | |
+| Azure Active Directory Support (AAD) | Yes | No |
+| Customer managed encryption key (BYOK) | Yes | No |
+| SCRAM Authentication (SHA-256) | No | Yes |
+| Secure Sockets Layer support (SSL) | Yes | Yes |
| **Other features** | | |
-| Customer managed key (BYOK) | Yes | No |
| Alerts | Yes | Yes |
| Microsoft Defender for Cloud | Yes | No |
| Resource health | Yes | No |
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-managed-vnet.md
Previously updated : 12/12/2021 Last updated : 01/11/2022 # Customer intent: As a Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Purview account. # Use a Managed VNet with your Azure Purview account
+> [!IMPORTANT]
+> Azure Purview Managed VNet, VNet Integration Runtime, and managed private endpoint connections are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
> [!IMPORTANT]
> Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
> - Canada Central
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-name-resolution.md
Previously updated : 10/18/2021 Last updated : 01/10/2022 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access. # Configure and verify DNS Name Resolution for Azure Purview private endpoints
-> [!IMPORTANT]
-> If you created a _portal_ private endpoint for your Purview account **prior to 27 September 2021 at 15:30 UTC**, you'll need to take the required actions as detailed in, [Reconfigure DNS for portal private endpoints](./catalog-private-link.md#reconfigure-dns-for-portal-private-endpoints). **These actions must be completed before November 12, 2021. Failing to do so will cause existing portal private endpoints to stop functioning**.
- ## Conceptual overview Accurate name resolution is a critical requirement when setting up private endpoints for your Azure Purview accounts.
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-troubleshoot.md
Previously updated : 10/15/2021 Last updated : 01/10/2022 # Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access. # Troubleshooting private endpoint configuration for Purview accounts
-> [!IMPORTANT]
-> If you created a _portal_ private endpoint for your Purview account **prior to 27 September 2021 at 15:30 UTC**, you'll need to take the required actions as detailed in, [Reconfigure DNS for portal private endpoints](./catalog-private-link.md#reconfigure-dns-for-portal-private-endpoints). **These actions must be completed before November 12, 2021. Failing to do so will cause existing portal private endpoints to stop functioning**.
- This guide summarizes known limitations related to using private endpoints for Azure Purview and provides a list of steps and solutions for troubleshooting some of the most common relevant issues. ## Known limitations
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Last updated 01/10/2022
# Use private endpoints for your Azure Purview account
-> [!IMPORTANT]
-> If you created a _portal_ private endpoint for your Purview account **prior to 27 September 2021 at 15:30 UTC**, you'll need to take the required actions as detailed in, [Reconfigure DNS for portal private endpoints](#reconfigure-dns-for-portal-private-endpoints). **These actions must be completed before November 12, 2021. Failing to do so will cause existing portal private endpoints to stop functioning**.
-- This article describes how to configure private endpoints for Azure Purview. ## Conceptual Overview
For scenarios where _ingestion_ private endpoint is used in your Azure Purview a
|Azure Synapse Analytics | Self-Hosted IR| Service Principal| |Azure Synapse Analytics | Self-Hosted IR| SQL Authentication|
-## Reconfigure DNS for portal private endpoints
-
-If you created a _portal_ private endpoint for your Purview account **prior to 27 September 2021 at 15:30 UTC**, take the required actions as detailed in this section.
-
-### Review your current DNS settings
-
-1. From Azure portal, locate your Purview account. From left hand menu click on **Networking**, select **Private Endpoint connections**. Click on each private endpoint in the list and follow the steps below.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-dns-updates-1.png" alt-text="Screenshot that shows purview private endpoint."lightbox="media/catalog-private-link/purview-pe-dns-updates-1.png":::
-
-2. If target sub-resource is _portal_, review **DNS configuration**, otherwise go back to previous step, and select the next private endpoint until you reviewed all of the private endpoints and have validated all of the private endpoints associated with the portal.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-dns-updates-2.png" alt-text="Screenshot that shows portal purview private endpoint."lightbox="media/catalog-private-link/purview-pe-dns-updates-2.png":::
-
-3. In the **DNS configuration** window verify the current settings:
-
- - If there are any records in the **Custom DNS records** section, follow steps in [Remediation scenarios 1](#scenario-1) and [Remediation scenario 2](#scenario-2).
-
- :::image type="content" source="media/catalog-private-link/purview-pe-dns-updates-3.png" alt-text="Screenshot that shows portal purview private endpoint custom DNS configuration."lightbox="media/catalog-private-link/purview-pe-dns-updates-3.png":::
-
- - If there are any records in the **Configuration name** section and If the DNS zone is `privatelink.purviewstudio.azure.com`, no action is required for this private endpoint. Go back to **step 1** and review remaining portal private endpoints.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-dns-updates-4.png" alt-text="Screenshot that shows portal purview private endpoint with new DNS zone."lightbox="media/catalog-private-link/purview-pe-dns-updates-4.png":::
-
- - If there are any records in the **Configuration name** section and If the DNS zone is `privatelink.purview.azure.com`, follow steps in [Remediation scenario 3](#scenario-3).
-
- :::image type="content" source="media/catalog-private-link/purview-pe-dns-updates-5.png" alt-text="Screenshot that shows portal purview private endpoint with old DNS zone."lightbox="media/catalog-private-link/purview-pe-dns-updates-5.png":::
-
-### Remediation scenarios
-
-#### Scenario 1
-
-If you **have added required DNS A records directly to your DNS or machines' host file**, **no action is required**.
-
-
-#### Scenario 2
-
-If you **have configured on-premises DNS Servers**, **DNS Forwarders or custom DNS resolution**, review your DNS settings and take proper actions:
-
-1. Review your DNS Server. if your DNS record is `web.purview.azure.com`, or if your conditional forwarder is `purview.azure.com`, **no action is required**.
-
-2. If your DNS record is `web.privatelink.purview.azure.com`, update the record to `web.privatelink.purviewstudio.azure.com`.
-
-3. If your conditional forwarder is `privatelink.purview.azure.com`, DO NOT REMOVE the zone. You are required to add a new conditional forwarder to `privatelink.purviewstudio.azure.com`.
-
-#### Scenario 3
-
-If you have configured **Azure Private DNS Zone integration for your Purview account**, follow these steps to redeploy private endpoints to reconfigure DNS settings:
-
-1. Deploy a new portal private endpoint:
-
- 1. Go to the [Azure portal](https://portal.azure.com), and then click on to your Azure Purview account, and under **Settings** select **Networking**, and then select **Private endpoint connections**.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-reconfigure-portal.png" alt-text="Screenshot that shows creating a portal private endpoint."lightbox="media/catalog-private-link/purview-pe-reconfigure-portal.png":::
-
- 2. Select **+ Private endpoint** to create a new private endpoint.
-
- 3. Fill in the basic information.
-
- 4. On the **Resource** tab, for **Resource type**, select **Microsoft.Purview/account**.
-
- 5. For **Resource**, select the Azure Purview account, and for **Target sub-resource**, select **portal**.
-
- 6. On the **Configuration** tab, select the virtual network and then, select Azure Private DNS zone to create a new Azure DNS Zone.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-reconfigure-portal-dns.png" alt-text="Screenshot that shows creating a portal private endpoint and DNS settings."lightbox="media/catalog-private-link/purview-pe-reconfigure-portal-dns.png":::
-
- 7. Go to the summary page, and select **Create** to create the portal private endpoint.
-
-2. Delete the previous portal private endpoint associated with the Purview account.
-
-3. Ensure that a new Azure Private DNS Zone `privatelink.purviewstudio.azure.com` is created during the deployment of the portal private endpoint, and that a corresponding A record (web) exists in the Private DNS Zone.
-
-4. Ensure you are able to successfully load Azure Purview Studio. It might take a few minutes (about 10 minutes) for the new DNS routing to take effect after reconfiguring DNS. You can wait a few minutes and try again, if it doesn't load immediately.
-
-5. If navigation fails, perform nslookup web.purview.azure.com, which should resolve to a private IP address that's associated to the portal private endpoint.
-
-6. Repeat steps 1 through 3 above for all existing portal private endpoints that you have.
-
-### Validation steps
-
-1. Ensure you are able to successfully load Azure Purview Studio. It might take a few minutes (about 10 minutes) for the new DNS routing to take effect after reconfiguring DNS. You can wait a few minutes and try again, if it doesn't load immediately.
-
-2. If navigation fails, perform nslookup `web.purview.azure.com`, which should resolve to a private IP address that's associated to the portal private endpoint.
- ## Frequently Asked Questions For FAQs related to private endpoint deployments in Azure Purview, see [FAQ about Azure Purview private endpoints](./catalog-private-link-faqs.md).
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
Make sure that the S3 bucket [policy](https://docs.aws.amazon.com/AmazonS3/lates
Make sure that there is no [SCP policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that blocks the connection to the S3 bucket.
-For example, your SCP policy might block read API calls from the [AWS scanning region](#storage-and-scanning-regions).
+For example, your SCP policy might block read API calls to the [AWS Region](#storage-and-scanning-regions) where your S3 bucket is hosted.
- Required API calls, which must be allowed by your SCP policy, include: `AssumeRole`, `GetBucketLocation`, `GetObject`, `ListBucket`, `GetBucketPublicAccessBlock`.
- Your SCP policy must also allow calls to the **us-east-1** AWS Region, which is the default Region for API calls. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html).
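For illustration, an SCP statement permitting those calls might look like the following sketch. The action list follows the requirements above, while the resource scope is an assumption you should adapt to your organization's policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketPublicAccessBlock"
      ],
      "Resource": "*"
    }
  ]
}
```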
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-cassandra-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Cassandra, and how to authenticate and int
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register) | [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-cassandra.md)|
+| [Yes](#register) | [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](how-to-lineage-cassandra.md)|
The supported Cassandra server versions are 3.*x* or 4.*x*.
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-db2.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register DB2, and how to authenticate and interact
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| Yes|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes|
The supported IBM DB2 versions are DB2 for LUW 9.7 to 11.x. DB2 for z/OS (mainframe) and iSeries (AS/400) are not supported now.
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-erwin-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register erwin Mart servers, and how to authenticat
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-erwin.md)|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](how-to-lineage-erwin.md)|
The supported erwin Mart versions are 9.x to 2021.
When scanning erwin Mart source, Purview supports:
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
- > [!Note]
- > Make sure to run self-hosted integration runtime on the VM where erwin Mart instance is running.
+ > [!IMPORTANT]
+ > Make sure to install the self-hosted integration runtime and the Erwin Data Modeler software on the same machine where the erwin Mart instance is running.
* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-google-bigquery-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Google BigQuery projects, and how to authe
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-google-bigquery.md)|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](how-to-lineage-google-bigquery.md)|
When scanning Google BigQuery source, Purview supports:
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Hive Metastore databases, and how to authe
|**Metadata extraction**| **Full scan** |**Incremental scan**|**Scoped scan**|**Classification**|**Access policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| Yes** |
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes** |
\** Lineage is supported if the dataset is used as a source or sink in the [Azure Data Factory Copy activity](how-to-link-azure-data-factory.md).
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-looker-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Looker, and how to authenticate and intera
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-looker.md)|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](how-to-lineage-looker.md)|
The supported Looker server version is 7.2.
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-mysql.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register MySQL, and how to authenticate and interac
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| Yes|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes|
The supported MySQL server versions are 5.7 to 8.x.
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-oracle-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Oracle, and how to authenticate and intera
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes**](how-to-lineage-oracle.md)|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes**](how-to-lineage-oracle.md)|
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-postgresql.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register PostgreSQL, and how to authenticate and in
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| Yes|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes|
The supported PostgreSQL server versions are 8.4 to 12.x.
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-salesforce.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Salesforce, and how to authenticate and in
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| No|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No|
When scanning Salesforce source, Purview supports extracting technical metadata including:
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sap-hana.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register SAP HANA, and how to authenticate and inte
|**Metadata extraction**| **Full scan** |**Incremental scan**|**Scoped scan**|**Classification**|**Access policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| No |
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No |
When scanning SAP HANA source, Purview supports extracting technical metadata including:
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-snowflake.md
Previously updated : 12/28/2021 Last updated : 01/11/2021
This article outlines how to register Snowflake, and how to authenticate and int
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| Yes|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes|
When scanning Snowflake source, Purview supports:
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-teradata-source.md
Previously updated : 12/28/2021 Last updated : 01/11/2022
This article outlines how to register Teradata, and how to authenticate and inte
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**| ||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes**](how-to-lineage-teradata.md)|
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes**](how-to-lineage-teradata.md)|
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
Nine digits that can be in a formatted or unformatted pattern.
- an optional hyphen
- a digit

### Checksum

Yes
Yes
- routing# - RTN
+-
+ ## Argentina national identity (DNI) number ### Format
Eight digits with or without periods
### Pattern

Eight digits:

- two digits
- an optional period
- three digits
No
- registro nacional de las personas - rnp
+-
+ ## Australia bank account number ### Format
No
- full names - iaea
+-
## Australia business number

This sensitive information type is only available for use in:

- data loss prevention policies
- communication compliance policies
Yes
- abn - businessno#
+-
+ ## Australia company number
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
nine digits with delimiters:
- a space
- three digits

### Checksum

Yes
Yes
- australian company no# - australian company number
+-
+ ## Australia driver's license number ### Format
No
- Driver's License# - Driver's Licenses#
+-
+ ## Australia medical account number ### Format
Yes
- local service - medicare
+-
## Australia passport number
No
- travel document - issuing authority
+-
## Australia tax file number
Yes
- tax file number - tfn
+-
+ ## Austria driver's license number ### Format
No
- dlno
- dl number

#### Keywords_austria_eu_driver's_license_number

- fuhrerschein
No
- F├╝hrerscheinnummer - F├╝hrerscheinnummern
+-
+ ## Austria identity card This sensitive information type is only available for use in: - data loss prevention policies
A 24-character combination of letters, digits, and special characters
24 characters: -- 22 letters (not case-sensitive), digits, backslashes, forward slashes, or plus signs
+- 22 letters (not case-sensitive), digits, backslashes, forward slashes, or plus signs
- two letters (not case-sensitive), digits, backslashes, forward slashes, plus signs, or equal signs
Not applicable
- national id - personalausweis republik ├╢sterreich
+-
+ ## Austria passport number ### Format
not applicable
- date of issue - date of expiry
+-
+ ## Austria social security number ### Format
Yes
- versicherungsnummer - zdravstveno zavarovanje
+-
+ ## Austria tax identification number ### Format
Yes
- tin# - tax number
+-
+ ## Austria value added tax
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
Yes
- atu number - uid number
+-
+ ## Belgium driver's license number ### Format
No
- permis de conduire - numéro permis conduire
+-
## Belgium national number
Yes
- tin no - tin#
+-
+ ## Belgium passport number ### Format
not applicable
- date of issue - date of expiry
+-
+ ## Belgium value added tax number
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
Yes
- btw# - vat#
+-
## Brazil CPF number
Yes
- Inscrição - Receita
+-
## Brazil legal entity number (CNPJ)
Yes
- Inscrição - Empresa
+-
## Brazil national identification card (RG)
Yes
- RG (this keyword is case-sensitive) - RIC (this keyword is case-sensitive)
+-
## Bulgaria driver's license number
No
- шофьорска книжка - шофьорски книжки
+-
+ ## Bulgaria uniform civil number
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
Yes
- униформгражданскиid# - униформгражданскине.#
+-
## Bulgaria passport number
No
- date of issue - date of expiry
+-
+ ## Canada bank account number ### Format
No
- banking information - direct deposit
+-
## Canada driver's license number
No
- identification cards# - identification#
+-
## Canada health service number
No
- workers compensation - disability
+-
## Canada passport number
No
- PasseportNon - Passeportn ┬░
+-
## Canada personal health identification number (PHIN)
No
- Prince Edward Island - Canada
+-
## Canada social insurance number
Yes
- Birthday - Date of Birth
+-
+ ## Chile identity card number ### Format
Yes
- Chile identity #
+-
+ ## China resident identity card (PRC) number ### Format
Yes
- Θææσ«Ü
+-
+ ## Credit card number ### Format
Yes, the Luhn checksum
+-
+ ## Croatia driver's license number ### Format
No
- voza─ìke dozvole
+-
+ ## Croatia identity card number+ This entity is included in the EU National Identification Number sensitive information type. It's available as a stand-alone sensitive information type entity. ### Format
No
- tin#
+-
+ ## Croatia passport number ### Format
No
- br. Putovnice - br putovnice
+-
+ ## Croatia personal identification (OIB) number ### Format
Yes
- tin no - tin#
+-
+ ## Cyprus drivers license number ### Format
No
- άδειες οδήγησης
+-
## Cyprus identity card

This sensitive information type is only available for use in:

- data loss prevention policies
- communication compliance policies
not applicable
- ταυτοτητασ
+-
+ ## Cyprus passport number ### Format
No
- expires on - issued on
+-
## Cyprus tax identification number

This sensitive information type is only available for use in:

- data loss prevention policies
- communication compliance policies
not applicable
- φορολογική ταυτότητα - φορολογικού κωδικού
+-
## Czech Republic Driver's License Number
No
- čísla řidičských průkazů
+-
+ ## Czech passport number ### Format
No
- date of expiry
+-
+ ## Czech National Identity Card Number ### Format
Yes
- tin# - unique identification number
+-
+
+## Date Of Birth
+
+### Format
+Any valid date
+
+### Checksum
+Not applicable
+
+### Keywords
+
+#### Keywords_date_of_birth
+
+- dob
+- birth day
+- natal day
+- any word containing the string *birth*
+
+-
## Denmark driver's license number
No
- k├╕rekort - k├╕rekortnummer
+-
## Denmark passport number
No
- date of issue - date of expiry
+-
## Denmark personal identification number
Yes
- sygesikringsnr - sygesikringsnummer
+-
+
+## Email
+
+### Format
+Any valid email address that abides by [RFC 5322](https://www.ietf.org/rfc/rfc5322.txt)
+
+### Checksum
+Not applicable
+
+### Keywords
+
+#### Keywords_email
+
+- contact
+- email
+- electronic
+- login
+- mail
+- online
+- user
+- webmail
+
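Full RFC 5322 validation is notoriously involved; the following is a deliberately loose sketch only, not the classifier's actual pattern:

```python
import re

# Simplified illustration; the real RFC 5322 grammar is far more permissive
EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

print(bool(EMAIL.fullmatch("user@contoso.com")))  # True
```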
+-
## Estonia driver's license number
No
#### Keywords_estonia_eu_driver's_license_number

-permis de conduire
+- permis de conduire
- juhilubade numbrid
- juhiloa number
- juhiluba
+-
## Estonia Personal Identification Code
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
Yes
- tin no - tin#
+-
## Estonia passport number
No
- date of issue - date of expiry
+-
+
+## Ethnic groups
+
+### Format
+
+Most common ethnic groups. For a reference list, see this [article](https://en.wikipedia.org/wiki/List_of_contemporary_ethnic_groups).
+
+### Checksum
+Not applicable
+
+### Keywords
+
+#### Keywords_ethnic_group
+
+- ethnic
+- ethnic groups
+- ethnicity
+- ethnicities
+- nationality
+- race
+
+-
## EU debit card number
Yes
- vto - válido hasta
+-
## EU driver's license number
These entities are in the EU Driver's License Number and are sensitive informati
- [Sweden](#sweden-drivers-license-number) - [U.K.](#uk-drivers-license-number)
+-
## EU passport number
These entities are in the EU passport number and are sensitive information types
- [Sweden](#sweden-passport-number) - [U.K.](#us--uk-passport-number)
+-
## Finland driver's license number
No
- ajokortin numerot
+-
+ ## Finland european health insurance number
-This sensitive information type is only available for use in:
-- data loss prevention policies
-- communication compliance policies
-- information governance
-- records management
-- Microsoft Defender for Cloud Apps

### Format
No
- terveyskortti
+-
+ ## Finland national ID ### Format
Yes
- verotunnus
+-
+ ## Finland passport number This entity is available in the EU Passport Number sensitive information type and is available as a stand-alone sensitive information type entity.
No
- date of issue - date of expiry
+-
+ ## France driver's license number This entity is available in the EU Driver's License Number sensitive information type and is available as a stand-alone sensitive information type entity.
No
- numéros de licence
+-
+ ## France health insurance number This sensitive information type is only available for use in: - data loss prevention policies
No
- carte d'assuré social
+-
+ ## France national id card (CNI) ### Format
No
- numéro de carte vitale
+-
+ ## France passport number This entity is available in the EU Passport Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
No
- date of expiry
+-
+ ## France social security number (INSEE) ### Format
Yes
- social insurance number
+-
+ ## France tax identification number ### Format
Yes
- tin#
+-
+ ## France value added tax number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- numéro d'identification siren
+-
+ ## Germany driver's license number This sensitive information type entity is included in the EU Driver's License Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
Yes
- dlno
+-
+ ## Germany identity card number ### Format
No
- persönliche-id-nummer
+-
+ ## Germany passport number This entity is included in the EU Passport Number sensitive information type and is available as a stand-alone sensitive information type entity.
Yes
- passport numbers
+-
+ ## Germany tax identification number ### Format
Yes
- zinnnummer
+-
+ ## Germany value added tax number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- mehrwertsteuer nummer
+-
+ ## Greece driver's license number This entity is included in the EU Driver's License Number sensitive information type. It is also available as a stand-alone sensitive information type entity.
No
- Δίπλωμα οδήγησης
+-
+ ## Greece national ID card ### Format
No
- ταυτότητας
+-
+ ## Greece passport number ### Format
No
- αριθμός διαβατηριο
+-
+ ## Greece Social Security Number (AMKA) This sensitive information type is only available for use in: - data loss prevention policies
Yes
- Αριθμού Μητρώου Κοινωνικής Ασφάλισης
+-
+ ## Greece tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Not applicable
- φορολογικού μητρώου νο
+-
+ ## Hong Kong identity card (HKID) number ### Format
Yes
- 香港特別行政區非永久性居民身分証
+-
+ ## Hungary driver's license number ### Format
No
- vezetői engedélyek
+-
+ ## Hungary personal identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- személyi igazolvány
+-
+ ## Hungary passport number ### Format
No
- date of expiry
+-
+ ## Hungary social security number (TAJ) ### Format
Yes
- magyar áfa szám
+-
+ ## Hungary tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- vat number
+-
+ ## Hungary value added tax number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- hozzáadottérték adó - áfa szám
+-
+ ## India permanent account number (PAN) ### Format
No
- Permanent Account Number - PAN
+-
+ ## India unique identification (Aadhaar) number ### Format
Yes
- आधार - uidai
+-
+ ## Indonesia identity card (KTP) number ### Format
No
- Kartu Tanda Penduduk - Nomor Induk Kependudukan
+-
+ ## International banking account number (IBAN) ### Format
Yes
None
+-
+ ## IP address ### Format
No
- IP-כתובת ה
+-
+ ## IP Address v4 ### Format
No
- IP-כתובת ה
+-
+ ## IP Address v6 ### Format
No
- IP-כתובת ה
+-
+ ## Ireland driver's license number ### Format
No
- ceadúnas tiomána - ceadúnais tiomána
+-
+ ## Ireland passport number ### Format
No
- date of expiry
+-
+ ## Ireland personal public service (PPS) number ### Format
Yes
- tin#
+-
+ ## Israel bank account number ### Format
No
- Account Number - מספר חשבון בנק
+-
+ ## Israel national identification number ### Format
Yes
- unique id
+-
+ ## Italy driver's license number This type entity is included in the EU Driver's License Number sensitive information type. It is also available as a stand-alone sensitive information type entity.
No
- patenti guida
+-
+ ## Italy fiscal code This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin#
+-
+ ## Italy passport number ### Format
not applicable
- date of expiry
+-
+ ## Italy value added tax number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- iva#
+-
+ ## Japan bank account number ### Format
No
- 支店コード - 店番号
+-
+ ## Japan driver's license number ### Format
No
- 免許#
+-
+ ## Japan My Number - Corporate This sensitive information type is only available for use in: - data loss prevention policies
Yes
- 指定通知書
+-
+ ## Japan My Number - Personal This sensitive information type is only available for use in: - data loss prevention policies
Yes
- 通知カード
+-
+ ## Japan passport number ### Format
No
- 旅券ナンバー
+-
+ ## Japan residence card number ### Format
No
- 在留カード - 在留番号
+-
+ ## Japan resident registration number ### Format
No
- 外国人登録証
+-
+ ## Japan social insurance number (SIN) ### Format
No
- 厚生年金被保険者整理番号
+-
+ ## Latvia driver's license number ### Format
No
- autovad─½t─üja apliec─½bas - vad─½t─üja apliec─½ba
+-
+ ## Latvia Personal Code ### Format
Yes
- tin# - voterΓÇÖs number
+-
+ ## Latvia passport number ### Format
No
- date of expiry
+-
+ ## Lithuania driver's license number ### Format
No
- vairuotojo pa┼╛ym─ùjimo numeris - vairuotojo pa┼╛ym─ùjimo numeriai
+-
+ ## Lithuania Personal Code This sensitive information type is only available for use in: - data loss prevention policies
Yes
- unique identity number - uniqueidentityno#
+-
+ ## Lithuania passport number ### Format
not applicable
- date of issue - date of expiry
+-
+ ## Location ### Format
Not applicable
- location - position
+-
+ ## Luxemburg driver's license number ### Format
No
- fahrerlaubnis - Führerschäin
+-
+ ## Luxemburg national identification number natural persons This sensitive information type is only available for use in: - data loss prevention policies
yes
- unique identity - uniqueidkey#
+-
+ ## Luxemburg national identification number non-natural persons ### Format
Yes
- zinn - zinnzahl
+-
+ ## Luxemburg passport number ### Format
No
- date of expiry
+-
+ ## Malaysia identification card number ### Format
No
- nric - personal identification card
+-
+ ## Malta driver's license number ### Format
No
- li─ïenzji tas-sewwieq
+-
+ ## Malta identity card number This sensitive information type is only available for use in: - data loss prevention policies
Not applicable
- uniqueidentityno#
+-
+ ## Malta passport number ### Format
No
- date of expiry
+-
+ ## Malta tax identification number ### Format
Not applicable
- unique identity number - uniqueidentityno#
+-
+ ## Netherlands citizen's service (BSN) number ### Format
Yes
- unique identity number - uniqueidentityno#
+-
+ ## Netherlands driver's license number ### Format
No
- rijbewijsnummers
+-
+ ## Netherlands passport number ### Format
not applicable
- paspoortnummer - paspoort nr
+-
+ ## Netherlands tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin#
+-
+ ## Netherlands value added tax number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- btw-nummer
+-
+ ## New Zealand bank account number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- bank_acct_nbr
+-
+ ## New Zealand driver's license number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- new zealand automobile association
+-
+ ## New Zealand inland revenue number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- inland revenue number
+-
+ ## New Zealand ministry of health number ### Format
Yes
- National Health Index Id - National Health Index #
+-
+ ## New Zealand social welfare number This sensitive information type is only available for use in:
Yes
- swn#
+-
+ ## Norway identification number ### Format
Yes
- F├╕dselsnummer
+-
+ ## Philippines unified multi-purpose identification number ### Format
No
- Identity Card - Pinag-isang Multi-Layunin ID
+-
+ ## Poland driver's license number ### Format
No
- prawo jazdy - prawa jazdy
+-
+ ## Poland identity card ### Format
Yes
- dow. os.
+-
+ ## Poland national ID (PESEL) ### Format
Yes
- tożsamości narodowej
+-
+ ## Poland passport number This sensitive information type entity is included in the EU Passport Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
Yes
- date of expiry
+-
+ ## Poland REGON number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- numeruregon#
+-
+ ## Poland tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- vatno#
+-
+ ## Portugal citizen card number ### Format
No
- portugal bi number
+-
+ ## Portugal driver's license number ### Format
No
- Licença condução Portugal - carta de condução
+-
+ ## Portugal passport number ### Format
No
- date of expiry
+-
+ ## Portugal tax identification number ### Format
Yes
- tin#
+-
+ ## Romania driver's license number ### Format
No
- permisele conducere - permis conducere
+-
+ ## Romania personal numeric code (CNP) This sensitive information type is only available for use in: - data loss prevention policies
Yes
- uniqueidentityno# - uniqueidentityno
+-
+ ## Romania passport number ### Format
No
- date of expiry
+-
+ ## Russia passport number domestic This sensitive information type is only available for use in: - data loss prevention policies
No
- номерпаспорта#
+-
+ ## Russia passport number international This sensitive information type is only available for use in: - data loss prevention policies
No
- номерпаспорта#
+-
+ ## Saudi Arabia National ID ### Format
No
- الوطنية الهوية بطاقة رقم
+-
+ ## Singapore national registration identity card (NRIC) number ### Format
Yes
- 身份证 - 身份證
+-
+ ## Slovakia driver's license number ### Format
No
- vodičského preukazu - vodičských preukazov
+-
+ ## Slovakia personal number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin no - tin#
+-
+ ## Slovakia passport number ### Format
No
- date of expiry
+-
+ ## Slovenia driver's license number ### Format
No
- številka vozniškega dovoljenja - številke vozniških dovoljenj
+-
+ ## Slovenia Unique Master Citizen Number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- uniqueidentityno # - uniqueidentityno#
+-
+ ## Slovenia passport number ### Format
No
- date of expiry
+-
+ ## Slovenia tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin#
+-
+ ## South Africa identification number ### Format
Yes
- ID - Identification
+-
+ ## South Korea resident registration number ### Format
Yes
- RRN - 주민등록번호
+-
+ ## Spain driver's license number ### Format
Yes
- licencia de manejo - licencia manejo
+-
+ ## Spain DNI This sensitive information type is only available for use in: - data loss prevention policies
Yes
- unique identity number - uniqueid#
+-
+ ## Spain passport number ### Format
Not applicable
- date of expiry
+-
+ ## Spain social security number (SSN)
Yes
- social security number - n├║mero de la seguridad social
+-
+ ## Spain tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin no - tin#
+-
+ ## Sweden driver's license number ### Format
No
- דריווערס דערלויבעניש - körkortsnummer
+-
+ ## Sweden national ID ### Format
Yes
- personnummer - skatteidentifikationsnummer
+-
+ ## Sweden passport number ### Format
No
- date of expiry
+-
+ ## Sweden tax identification number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- tin#
+-
+ ## SWIFT code ### Format
No
- 金融機関コード - 銀行コード
+-
+ ## Switzerland SSN AHV number This sensitive information type is only available for use in: - data loss prevention policies
Yes
- numéro de sécurité sociale
+-
+ ## Taiwan national identification number ### Format
Yes
- 簽名或蓋章 - 簽章
+-
+ ## Taiwan passport number ### Format
No
- 中華民國護照 - Zhōnghuá Mínguó hùzhào
+-
+ ## Taiwan-resident certificate (ARC/TARC) number ### Format
No
- 外僑居留證 - 台灣地區居留證
+-
+ ## U.K. driver's license number ### Format
Yes
- dl number
+-
+ ## U.K. electoral roll number ### Format
No
- electoral roll
+-
+ ## U.K. national health service number ### Format
Yes
- Date of Birth - Birth Date
+-
+ ## U.K. national insurance number (NINO) This sensitive information type entity is included in the EU National Identification Number sensitive information type. It's also available as a stand-alone sensitive information type entity.
No
- nationalinsurancenumber
+-
+ ## U.K. Unique Taxpayer Reference Number This sensitive information type is only available for use in: - data loss prevention policies
No
- tin no - tin#
+-
+ ## U.S. bank account number ### Format
No
- Debit Acct No. - Debit Account No.
+-
+ ## U.S. driver's license number ### Format
No
- state abbreviation (for example, "NY") - state name (for example, "New York")
+-
+ ## U.S. individual taxpayer identification number (ITIN) ### Format
No
- itins - taxid - individual taxpayer
+-
+
+## U.S. phone number
+
+### Pattern
+- 10-digit number, for example, +1 nxx-nxx-xxxx
+- Optional area code: +1
+- n can be any digit between 2-9
+- x can be any digit between 0-9
+- Optional parentheses around the area code
+- Optional space or - between area code, exchange code, and the last four digits
+- Optional 4 digit extension
+
+### Checksum
+Not applicable
+
+### Keywords
+#### Keywords_us_phone_number
+- cell
+- cellphone
+- contact
+- landline
+- mobile
+- mob
+- mob#
+- ph#
+- phone
+- telephone
+- tel#
+
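A hedged sketch of the documented pattern as a regular expression (illustrative only; the extension syntax is an assumption):

```python
import re

# Optional +1, area code and exchange starting with 2-9, optional parentheses,
# spaces, or hyphens, and an optional 4-digit extension
US_PHONE = re.compile(
    r"(?:\+1[ -]?)?"               # optional country code
    r"\(?[2-9]\d{2}\)?[ -]?"       # area code, optionally in parentheses
    r"[2-9]\d{2}[ -]?"             # exchange code
    r"\d{4}"                       # last four digits
    r"(?: ?(?:x|ext\.?) ?\d{4})?"  # optional 4-digit extension (assumed syntax)
)

print(bool(US_PHONE.fullmatch("+1 (425) 555-0100")))  # True
```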
+-
## U.S. social security number (SSN)
No
- SS# - SSID
+-
+
+## U.S. states
+
+### Format
+Includes all 50 U.S. state names and the two-digit short codes.
+
+### Checksum
+Not applicable
+
+### Keywords
+
+#### Keywords_us_states
+- State
+
+-
+
+## U.S. zipcode
+
+### Format
+Five-digit U.S. ZIP code and an optional four-digit code separated by a hyphen (-).
+
+### Checksum
+Not applicable
+
+### Keywords
+
+#### Keywords_us_zip_code
+- zip
+- zipcode
+- postal
+- postalcode
+
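As a minimal regular-expression sketch of that format (an illustration, not the classifier's actual pattern):

```python
import re

# Five digits, optionally followed by a hyphen and four more digits
US_ZIP = re.compile(r"\d{5}(?:-\d{4})?")

print(bool(US_ZIP.fullmatch("98052")))       # True
print(bool(US_ZIP.fullmatch("98052-6399")))  # True
```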
+-
+ ## U.S. / U.K. passport number ### Format
No
- uk passport
+-
+ ## Ukraine passport domestic This sensitive information type is only available for use in: - data loss prevention policies
No
- персональний
+-
+ ## Ukraine passport international This sensitive information type is only available for use in: - data loss prevention policies
search Search How To Load Search Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-how-to-load-search-index.md
+
+ Title: Load a search index
+
+description: Import and refresh data in a search index using the portal, REST APIs, or an Azure SDK.
+ Last updated: 01/11/2022
+# Load data into a search index in Azure Cognitive Search
+
+This article explains how to import, refresh, and manage content in a predefined search index. In Azure Cognitive Search, a [search index is created first](search-how-to-create-search-index.md), with data import following as a second step. The exception is the Import Data wizard, which creates and loads an index in one workflow.
+
+A search service imports and indexes JSON text used in full-text search or knowledge mining scenarios. Text content can come from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content).
+
+Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and cannot be changed, see [Drop and rebuild an index](search-howto-reindex.md).
+
+Indexing is not a background process. A search service will balance indexing and query workloads, but if [query latency is too high](search-performance-analysis.md#impact-of-indexing-on-queries), you can either [add capacity](search-capacity-planning.md#add-or-reduce-replicas-and-partitions) or identify periods of low query activity for loading an index.
+
+## Load documents
+
+A search service accepts JSON documents that conform to the index schema.
+
+You can prepare these documents yourself, but if content resides in a [supported data source](search-indexer-overview.md#supported-data-sources), running an [indexer](search-indexer-overview.md) or the Import data wizard can automate document retrieval, JSON serialization, and indexing.
+
+### [**Azure portal**](#tab/import-portal)
+
+In the Azure portal, the only way to load an index is through the [Import Data wizard](search-import-data-portal.md). Because the wizard creates its own objects, if you want to load an existing index, you'll need to use an alternative approach.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+
+1. [Find your search service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices/) and on the Overview page, click **Import data** on the command bar to create and populate a search index.
+
+ :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
+
+1. Follow this link to review the workflow: [Quickstart: Create an Azure Cognitive Search index in the Azure portal](search-get-started-portal.md).
+
+### [**REST**](#tab/import-rest)
+
+[Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents) is the means by which you can import data into a search index. The @search.action parameter determines whether documents are added in full, or merged partially with new or replacement values for specific fields.
+
+[**REST Quickstart: Create, load, and query an index**](search-get-started-rest.md) explains the steps. The following example is a modified version of the one in that quickstart. It's been trimmed for brevity, and the first HotelId value has been altered to avoid overwriting an existing document.
+
+1. Formulate a POST call specifying the index name, the "docs/index" endpoint, and a request body that includes the @search.action parameter.
+
+ ```http
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+ {
+ "value": [
+ {
+ "@search.action": "upload",
+ "HotelId": "1111",
+ "HotelName": "Secret Point Motel",
+ "Description": "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "air conditioning", "concierge" ]
+ },
+ {
+ "@search.action": "mergeOrUpload",
+ "HotelId": "2",
+ "HotelName": "Twin Dome Motel",
+        "Description": "This description is replacing the original one for this hotel. New and changed values overwrite the previous ones. In a comma-delimited list like Tags, be sure to provide the full list because there is no merging of values within the field itself.",
+ "Category": "Boutique",
+ "Tags": [ "pool", "free wifi", "concierge", "my first new tag", "my second new tag" ]
+ }
+ ]
+ }
+ ```
+
+1. [Look up the documents](/rest/api/searchservice/lookup-document) you just added as a validation step:
+
+ ```http
+    GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs/1111?api-version=2020-06-30
+ ```
+
+When the document key or ID is new, **null** becomes the value for any field that is unspecified in the document. For actions on an existing document, updated values replace the previous values. Any fields that were not specified in a "merge" or "mergeOrUpload" action are left intact in the search index.
+
+### [**.NET SDK (C#)**](#tab/importcsharp)
+
+Azure Cognitive Search supports the following APIs for simple and bulk document uploads into an index:
+
++ [IndexDocumentsAction](/dotnet/api/azure.search.documents.models.indexdocumentsaction)
++ [IndexDocumentsBatch](/dotnet/api/azure.search.documents.models.indexdocumentsbatch)
+
+There are several samples that illustrate indexing in the context of simple and large-scale indexing:
+
++ [**"Load an index"**](search-howto-dotnet-sdk.md#load-an-index) explains basic steps.
+
++ [**Azure.Search.Documents Samples - Indexing Documents**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md) from the Azure SDK team adds [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1).
+
++ [**Tutorial: Index any data**](tutorial-optimize-indexing-push-api.md) couples batch indexing with testing strategies for determining an optimum batch size.
+
+## Delete orphan documents
+
+Azure Cognitive Search supports document-level operations so that you can look up, update, and delete a specific document in isolation. The following example shows how to delete a document. In a search service, documents are unrelated so deleting one will have no impact on the rest of the index.
+
+1. Identify which field is the document key. In the portal, you can view the fields of each index. Document keys are string fields and are denoted with a key icon to make them easier to spot.
+
+1. Check the values of the document key field: `search=*&$select=HotelId`. A simple string is straightforward, but if the index uses a base-64 encoded field, or if search documents were generated from a `parsingMode` setting, you might be working with values that you aren't familiar with.
+
+1. [Look up the document](/rest/api/searchservice/lookup-document) to verify the value of the document ID and to review its content before deleting it. Specify the key or document ID in the request. The following examples illustrate a simple string for the [Hotels sample index](search-get-started-portal.md) and a base-64 encoded string for the metadata_storage_path key of the [cog-search-demo index](cognitive-search-tutorial-blob.md).
+
+ ```http
+    GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs/1111?api-version=2020-06-30
+ ```
+
+ ```http
+ GET https://[service name].search.windows.net/indexes/cog-search-demo/docs/aHR0cHM6Ly9oZWlkaWJsb2JzdG9yYWdlMi5ibG9iLmNvcmUud2luZG93cy5uZXQvY29nLXNlYXJjaC1kZW1vL2d1dGhyaWUuanBn0?api-version=2020-06-30
+ ```
+
+1. [Delete the document](/rest/api/searchservice/addupdate-or-delete-documents) to remove it from the search index.
+
+ ```http
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: [admin key]
+ {
+ "value": [
+ {
+ "@search.action": "delete",
+        "HotelId": "1111"
+ }
+ ]
+ }
+ ```
+
+## See also
+
++ [Search indexes overview](search-what-is-an-index.md)
++ [Data import overview](search-what-is-data-import.md)
++ [Import data wizard overview](search-import-data-portal.md)
++ [Indexers overview](search-indexer-overview.md)
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-reindex.md
-+ Last updated 01/10/2022
During development, the index schema changes frequently. You can plan for it by
For applications already in production, we recommend creating a new index that runs side by side an existing index to avoid query downtime. Your application code provides redirection to the new index.
-Indexing does not run in the background and the service will balance the additional indexing against ongoing queries. During indexing, you can [monitor query requests](search-monitor-queries.md) in the portal to ensure queries are completing in a timely manner.
- 1. Determine whether a rebuild is required. If you are just adding fields, or changing some part of the index that is unrelated to fields, you might be able to simply [update the definition](/rest/api/searchservice/update-index) without deleting, recreating, and fully reloading it. 1. [Get an index definition](/rest/api/searchservice/get-index) in case you need it for future reference.
When you create the index, physical storage is allocated for each field in the i
When you load the index, each field's inverted index is populated with all of the unique, tokenized words from each document, with a map to corresponding document IDs. For example, when indexing a hotels data set, an inverted index created for a City field might contain terms for Seattle, Portland, and so forth. Documents that include Seattle or Portland in the City field would have their document ID listed alongside the term. On any [Add, Update or Delete](/rest/api/searchservice/addupdate-or-delete-documents) operation, the terms and document ID list are updated accordingly.
-> [!NOTE]
-> If you have stringent SLA requirements, you might consider provisioning a new service specifically for this work, with development and indexing occurring in full isolation from a production index. A separate service runs on its own hardware, eliminating any possibility of resource contention. When development is complete, you would either leave the new index in place, redirecting queries to the new endpoint and index, or you would run finished code to publish a revised index on your original Azure Cognitive Search service. There is currently no mechanism for moving a ready-to-use index to another service.
+## Balancing workloads
+
+Indexing does not run in the background, but the search service will balance any indexing jobs against ongoing queries. During indexing, you can [monitor query requests](search-monitor-queries.md) in the portal to ensure queries are completing in a timely manner.
+
+If indexing workloads introduce unacceptable levels of query latency, conduct [performance analysis](search-performance-analysis.md) and review these [performance tips](search-performance-tips.md) for potential mitigation.
## Check for updates
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-run-reset-indexers.md
When testing this API for the first time, the following APIs will help you valid
+ The document keys provided in the request are values from the search index, which can be different from the corresponding fields in the data source. If you are unsure of the key value, [send a query](search-query-create.md) to return the value. You can use `$select` to return just the document key field.
- + For blobs that are parsed into multiple search documents (for example, if you used [jsonLines or jsonArrays](search-howto-index-json-blobs.md), or [delimitedText](search-howto-index-csv-blobs.md)) as a parsing mode, the document key is generated by the indexer and might be unknown to you. In this situation, a query for the document key will be instrumental in providing the correct value.
+ + For blobs that are parsed into multiple search documents (where parsingMode is set to [jsonLines or jsonArrays](search-howto-index-json-blobs.md), or [delimitedText](search-howto-index-csv-blobs.md)), the document key is generated by the indexer and might be unknown to you. In this scenario, send a query for the document key to return the correct value.
1. Call [Run Indexer](/rest/api/searchservice/run-indexer) (any API version) to process the documents you specified. Only those specific documents are indexed.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| -- | -- | - | | **Incidents** | | |- [Automation rules](../../sentinel/automate-incident-handling-with-automation-rules.md) | Public Preview | Public Preview |
-| - [Cross-tenant/Cross-workspace incidents view](../../sentinel/multiple-workspace-view.md) |Public Preview | Public Preview |
+| - [Cross-tenant/Cross-workspace incidents view](../../sentinel/multiple-workspace-view.md) |GA | GA |
| - [Entity insights](../../sentinel/enable-entity-behavior-analytics.md) | GA | Public Preview | |- [SOC incident audit metrics](../../sentinel/manage-soc-with-incident-metrics.md) | GA | GA | | - [Incident advanced search](/azure/sentinel/investigate-cases#search-for-incidents) |GA |GA |
sentinel Authentication Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/authentication-normalization-schema.md
Microsoft Sentinel provides the following built-in, product-specific authenticat
- **AWS sign-ins**, collected using the AWS CloudTrail connector. - **Okta authentication**, collected using the Okta connector.
-To use the source-agnostic parser, which unifies all of listed parsers, ensuring that you analyze data across all the configured sources, use **imAuthentication** as the table name in your query.
+To use the unifying parser, which unifies all of the listed parsers and ensures that you analyze data across all the configured sources, use **imAuthentication** as the table name in your query.
-Deploy the [source-agnostic and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelAuth).
+Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelAuth).
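+
+For example, a minimal query sketch using the unifying parser; the field names follow the ASIM authentication schema, and the failure threshold is an arbitrary illustration:
+
+```kusto
+// Count failed sign-ins per target user over the last day, across all
+// sources unified by imAuthentication.
+imAuthentication
+| where TimeGenerated > ago(1d)
+| where EventResult == 'Failure'
+| summarize FailedLogons = count() by TargetUsername
+| where FailedLogons > 10
+```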
An **Actor**, running an *Acting Application* (**ActingApp**) on a *Source Devic
For more information, see: -- [Normalization in Microsoft Sentinel](normalization.md)-- [Microsoft Sentinel DNS normalization schema reference](dns-normalization-schema.md)-- [Microsoft Sentinel file event normalization schema reference (Public preview)](file-event-normalization-schema.md)-- [Microsoft Sentinel network normalization schema reference](./network-normalization-schema.md)-- [Microsoft Sentinel process event normalization schema reference (Public preview)](process-events-normalization-schema.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Dhcp Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dhcp-normalization-schema.md
The fields below are specific to DHCP events, but many are similar to fields in
For more information, see: - Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)-- [Advanced SIEM Information Model content](normalization-content.md)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The most important activity reported by DNS servers is a DNS query, for which th
The most important fields in a DNS event are:

- [DnsQuery](#query), which reports the domain name for which the query was issued.
+
- The [SrcIpAddr](#srcipaddr) (aliased to [IpAddr](#ipaddr)), which represents the IP address from which the request was generated.
+
- [EventResultDetails](#eventresultdetails), which reports whether the request was successful and if not, why.
-- When available, [DnsResponseName](#responsename), which holds the answer provided by the server to the query. ASIM does not require parsing the response, and its format varies between sources. To use this field in source-agnostic content, search the content using the `has` or `contains` operators.
+
+- When available, [DnsResponseName](#responsename), which holds the answer provided by the server to the query. ASIM does not require parsing the response, and its format varies between sources.
+
+ To use this field in source-agnostic content, search the content using the `has` or `contains` operators.
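+
+For example, a minimal sketch that searches the raw response with `has` instead of parsing it; the IP literal is a documentation placeholder:
+
+```kusto
+// Find DNS answers containing a given IP without parsing the
+// source-specific response format.
+_Im_Dns
+| where EventSubType == 'response'
+| where DnsResponseName has '203.0.113.17'
+```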
DNS events collected on client device may also include [User](#user) and [Process](#process) information.
imDNS | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
## Parsers
-### Source-agnostic parsers
+For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
-To use the source-agnostic parsers that unify all of the out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query:
+### Unifying parsers
-| Name | Description | Usage instructions |
-| | | |
-| **imDNS** | Aggregative parser that uses *union* to include normalized events from all DNS sources. |- Update this parser if you want to add or remove sources from source-agnostic analytics. <br><br>- Use this function in your source-agnostic queries.|
-| **ASimDNS** | Similar to the `imDns` function, but without parameter support, and therefore does not force the **Logs** page time picker to use the `custom` value. |- Update this parser if you want to add or remove sources from source-agnostic analytics.<br><br>- Use this function in your source-agnostic queries if you don't plan to use parameters.|
-| **vimDNS\<vendor\>\<product\>** | Source-specific parsers implement normalization for a specific source, such as *vimDNSWindowsOMS*. |- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the `im` aggregative parser to include reference to your new parser. <br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use a source-specific parser for source-specific analytics.|
-| **ASimDNS\<vendor\>\<product\>** | Source-specific parsers implement normalization for a specific source. Unlike the `vim*` functions, the `ASimDNS*` functions do not support parameters. |- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the aggregative `ASim` parser to include reference to your new parser.<br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use an `ASim` source-specific parser for interactive queries when not using parameters.|
-| | | |
+To use parsers that unify all ASIM out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the `_Im_Dns` filtering parser or the `_ASim_Dns` parameter-less parser.
-The parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
+Deploy unifying parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS). For more information, see [built-in ASIM parsers and workspace-deployed parsers](normalization-parsers-overview.md#built-in-asim-parsers-and-workspace-deployed-parsers).
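+
+For example, a minimal sketch that filters inside the built-in filtering parser rather than after parsing; the parameter names (`starttime`, `responsecodename`) follow the DNS filtering parser signature documented later in this article:
+
+```kusto
+// Pre-filter failed lookups (NXDOMAIN) for the last day, then aggregate.
+_Im_Dns(starttime=ago(1d), responsecodename='NXDOMAIN')
+| summarize Queries = count() by SrcIpAddr
+```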
### Out-of-the-box, source-specific parsers Microsoft Sentinel provides the following out-of-the-box, product-specific DNS parsers:
-| **Name** | **Description** |
-| | |
-|**Microsoft DNS Server** | **Collected using the DNS connector and the Log Analytics Agent**: <br> - `ASimDnsMicrosoftOMS` (regular) <br>- `vimDnsMicrosoftOMS` (parametrized) <br><br> **Collected using NXlog**: <br> - `ASimDnsMicrosoftNXlog` (regular)<br>- `vimDnsMicrosoftNXlog` (parameterized) |
-| **Azure Firewall** |- `ASimDnsAzureFirewall` (regular)<br>- `vimDnsAzureFirewall` (parameterized) |
-|**Sysmon for Windows** (event 22) | **Collected using the Log Analytics Agent or the Azure Monitor Agent**, supporting both the `Event` and `WindowsEvent` tables: <br>- `ASimDnsMicrosoftSysmon` (regular)<br>- `vimDnsMicrosoftSysmon` (parametrized) |
-|**Cisco Umbrella** | - `ASimDnsCiscoUmbrella` (regular)<br>- `vimDnsCiscoUmbrella` (parametrized) |
-|**Infoblox NIOS** |- `ASimDnsInfobloxNIOS` (regular)<br>- `vimDnsInfobloxNIOS` (parametrized) |
-| **GCP DNS** |- `ASimDnsGcp` (regular)<br>- `vimDnsGcp` (parametrized) |
-| **Corelight Zeek DNS events** | - `ASimDnsCorelightZeek` (regular)<br>- `vimDnsCorelightZeek` (parametrized) |
-| **zScaler ZIA** |- `AsimDnszScalerZIA` (regular)<br>- `vimDnszScalerZIA` (parametrized) |
-| | |
+| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
+| | | |
+|**Microsoft DNS Server**<br>Collected using the DNS connector<br> and the Log Analytics Agent | `_ASim_DnsMicrosoftOMS` (regular) <br> `_Im_DnsMicrosoftOMS` (filtering) | `ASimDnsMicrosoftOMS` (regular) <br>`vimDnsMicrosoftOMS` (filtering) |
+| **Microsoft DNS Server**<br>Collected using NXlog| `_ASim_DnsMicrosoftNXlog` (regular)<br>`_Im_DnsMicrosoftNXlog` (filtering)| `ASimDnsMicrosoftNXlog` (regular)<br> `vimDnsMicrosoftNXlog` (filtering)|
+| **Azure Firewall** | `_ASim_DnsAzureFirewall` (regular)<br> `_Im_DnsAzureFirewall` (filtering) | `ASimDnsAzureFirewall` (regular)<br>`vimDnsAzureFirewall` (filtering) |
+| **Sysmon for Windows** (event 22)<br> Collected using the Log Analytics Agent<br> or the Azure Monitor Agent,<br>supporting both the<br> `Event` and `WindowsEvent` tables | `_ASim_DnsMicrosoftSysmon` (regular)<br> `_Im_DnsMicrosoftSysmon` (filtering) | `ASimDnsMicrosoftSysmon` (regular)<br> `vimDnsMicrosoftSysmon` (filtering) |
+| **Cisco Umbrella** | `_ASim_DnsCiscoUmbrella` (regular)<br> `_Im_DnsCiscoUmbrella` (filtering) | `ASimDnsCiscoUmbrella` (regular)<br> `vimDnsCiscoUmbrella` (filtering) |
+| **Infoblox NIOS** | `_ASim_DnsInfobloxNIOS` (regular)<br> `_Im_DnsInfobloxNIOS` (filtering) | `ASimDnsInfobloxNIOS` (regular)<br> `vimDnsInfobloxNIOS` (filtering) |
+| **GCP DNS** | `_ASim_DnsGcp` (regular)<br> `_Im_DnsGcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) |
+| **Corelight Zeek DNS events** | `_ASim_DnsCorelightZeek` (regular)<br> `_Im_DnsCorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) |
+| **zScaler ZIA** |`_ASim_DnsZscalerZIA` (regular)<br> `_Im_DnsZscalerZIA` (filtering) | `ASimDnsZscalerZIA` (regular)<br> `vimDnsZscalerZIA` (filtering) |
+| | | |
-The parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
+These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
### Add your own normalized parsers
imDns (domain_has_any = torProxies)
## Normalized content
-For a full list of analytics rules that use normalized DNS events, see the [DNS query security content](normalization-content.md#dns-query-security-content) section.
+For a full list of analytics rules that use normalized DNS events, see [DNS query security content](normalization-content.md#dns-query-security-content).
## Schema details
The fields below are specific to DNS events, although many are similar to fields
| <a name="srcdvcid"></a>**SrcDvcId** | Optional | String | The ID of the source device as reported in the record.<br><br>For example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` | | **SrcDvcIdType** | Optional | Enumerated | The type of [SrcDvcId](#srcdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list above, and store the others in the **SrcDvcAzureResourceId** and **SrcDvcMDEid**, respectively.<br><br>**Note**: This field is required if [SrcDvcId](#srcdvcid) is used. | | **SrcDeviceType** | Optional | Enumerated | The type of the source device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` |
-| <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. Format and supported types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [SrcUserIdType](#srcuseridtype) field. <br><br>If other IDs are available, we recommend that you normalize the field names to **SrcUserSid**, **SrcUserUid**, **SrcUserAadId**, **SrcUserOktaId** and **UserAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: S-1-12 |
+| <a name="srcuserid"></a>**SrcUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the source user. Format and supported types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [SrcUserIdType](#srcuseridtype) field. <br><br>If other IDs are available, we recommend that you normalize the field names to **SrcUserSid**, **SrcUserUid**, **SrcUserAadId**, **SrcUserOktaId** and **UserAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `S-1-12` |
| <a name="srcuseridtype"></a>**SrcUserIdType** | Optional | Enumerated | The type of the ID stored in the [SrcUserId](#srcuserid) field. Supported values include: `SID`, `UIS`, `AADID`, `OktaId`, and `AWSId`. | | <a name="srcusername"></a>**SrcUsername** | Optional | String | The Source username, including domain information when available. Use one of the following formats and in the following order of priority:<br>- **Upn/Email**: `johndow@contoso.com`<br>- **Windows**: `Contoso\johndow`<br>- **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM`<br>- **Simple**: `johndow`. Use the Simple form only if domain information is not available.<br><br>Store the Username type in the [SrcUsernameType](#srcusernametype) field. If other IDs are available, we recommend that you normalize the field names to **SrcUserUpn**, **SrcUserWindows** and **SrcUserDn**.<br><br>For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `AlbertE` | | <a name="user"></a>**User** | Alias | | Alias to [SrcUsername](#srcusername) |
The fields below are specific to DNS events, although many are similar to fields
| **SrcUserDomain** | Optional | String | This field is kept for backward compatibility only. ASIM requires domain information, if available, to be part of the [SrcUsername](#srcusername) field. | | <a name="srcprocessname"></a>**SrcProcessName** | Optional | String | The file name of the process initiating the DNS request. This name is typically considered to be the process name. <br><br>Example: `C:\Windows\explorer.exe` | | <a name="process"></a>**Process** | Alias | | Alias to the [SrcProcessName](#srcprocessname) <br><br>Example: `C:\Windows\System32\rundll32.exe`|
-| **SrcProcessId**| Mandatory | String | The process ID (PID) of the process initiating the DNS request.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
+| **SrcProcessId**| Optional | String | The process ID (PID) of the process initiating the DNS request.<br><br>Example: `48610176` <br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
| **SrcProcessGuid** | Optional | String | A generated unique identifier (GUID) of the process initiating the DNS request. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` | | <a name="dst"></a>**Dst** | Recommended | String | A unique identifier of the server receiving the DNS request. <br><br>This field may alias the [DstDvcId](#dstdvcid), [DstHostname](#dsthostname), or [DstIpAddr](#dstipaddr) fields. <br><br>Example: `192.168.12.1` | | <a name="dstipaddr"></a>**DstIpAddr** | Optional | IP Address | The IP address of the server receiving the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to `127.0.0.1`.<br><br>Example: `127.0.0.1` |
The fields below are specific to DNS events, although many are similar to fields
| **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the list above, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.| | **DstDeviceType** | Optional | Enumerated | The type of the destination device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` | | <a name=query></a>**DnsQuery** | Mandatory | FQDN | The domain that needs to be resolved. <br><br>**Note**: Some sources send this query in different formats. For example, in the DNS protocol itself, the query includes a dot (**.**) at the end, which must be removed.<br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and optionally keep the rest in the [AdditionalFields](normalization-about-schemas.md#additionalfields) field.<br><br>Example: `www.malicious.com` |
-| **Domain** | Alias | | Alias to [Query](#query). |
+| **Domain** | Alias | | Alias to [DnsQuery](#query). |
| **DnsQueryType** | Optional | Integer | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `28`| | **DnsQueryTypeName** | Recommended | Enumerated | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value.<br><br>Example: `AAAA`|
-| <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source agnostics analytics. Therefore the information model does not require parsing and normalization, and Microsoft Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
+| <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source-agnostic analytics. Therefore the information model does not require parsing and normalization, and Microsoft Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` | | **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`| | **TransactionIdHex** | Recommended | String | The DNS unique hex transaction ID. |
You can also provide an extra KQL function called `_imDNS<vendor>Flags_`, which
For more information, see: -- [Normalization in Microsoft Sentinel](normalization.md)-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)-- [Advanced SIEM Information Model content](normalization-content.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/false-positives.md
To add an automation rule to handle a false positive:
## Add exceptions by modifying analytics rules
-Another option for implementing exceptions is to modify the analytics rule query. You can include exceptions directly in the rule, or preferably, when possible, use a reference to a [watchlist](watchlists.md#use-watchlists-in-analytics-rules). You can then manage the exception list in the watchlist.
+Another option for implementing exceptions is to modify the analytics rule query. You can include exceptions directly in the rule, or preferably, when possible, use a reference to a [watchlist](watchlists-queries.md#create-an-analytics-rule-with-a-watchlist). You can then manage the exception list in the watchlist.
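+
+For illustration, a minimal sketch of this pattern; the watchlist name, its `IPAddress` column, and the base query are hypothetical placeholders:
+
+```kusto
+// Exclude source IPs listed in a watchlist of known false positives.
+let exceptions = _GetWatchlist('FalsePositiveIPs') | project IPAddress = tostring(IPAddress);
+imAuthentication
+| where EventResult == 'Failure'
+| where SrcIpAddr !in (exceptions)
+```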
### Modify the query
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/file-event-normalization-schema.md
Microsoft Sentinel provides the following built-in, product-specific file event
- **Microsoft 365 Defender for Endpoint file events** - **Azure Storage**, including Blob, File, Queue, and Table Storage.
-To use the source-agnostic parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use imFileEvent as the table name in your query.
+To use the unifying parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use `imFileEvent` as the table name in your query.
-Deploy the [source-agnostic and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelFileEvent).
+Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelFileEvent).
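+
+As an illustration, a minimal sketch using the unifying parser; the field names follow the ASIM file event schema, and the `.exe` filter is an arbitrary example:
+
+```kusto
+// Summarize file-creation events for executables across all sources
+// unified by imFileEvent.
+imFileEvent
+| where EventType == 'FileCreated'
+| where TargetFilePath endswith '.exe'
+| summarize Events = count() by TargetFilePath, ActorUsername
+```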
## Add your own normalized parsers When implementing custom parsers for the File Event information model, name your KQL functions using the following syntax: `imFileEvent<vendor><product>`.
-Add your KQL function to the `imFileEvent` source-agnostic parser to ensure that any content using the File Event model also uses your new parser.
+Add your KQL function to the `imFileEvent` unifying parser to ensure that any content using the File Event model also uses your new parser.
## Normalized content for file activity data
The path should be normalized to match one of the following formats. The format
For more information, see: -- [Normalization in Microsoft Sentinel](normalization.md)-- [Microsoft Sentinel authentication normalization schema reference (Public preview)](authentication-normalization-schema.md)-- [Microsoft Sentinel DNS normalization schema reference](dns-normalization-schema.md)-- [Microsoft Sentinel network normalization schema reference](./network-normalization-schema.md)-- [Microsoft Sentinel process event normalization schema reference (Public preview)](process-events-normalization-schema.md)-- [Microsoft Sentinel registry event normalization schema reference (Public preview)](registry-event-normalization-schema.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Multiple Workspace View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/multiple-workspace-view.md
Title: Work with Microsoft Sentinel incidents in many workspaces at once | Micro
description: How to view incidents in multiple workspaces concurrently in Microsoft Sentinel. Previously updated : 01/09/2022 Last updated : 01/11/2022
To take full advantage of Microsoft SentinelΓÇÖs capabilities, Microsoft recomme
When you open Microsoft Sentinel, you are presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. To the left of each workspace name is a checkbox. Selecting the name of a single workspace will bring you into that workspace. To choose multiple workspaces, select all the corresponding checkboxes, and then select the **View incidents** button at the top of the page. > [!IMPORTANT]
-> Multiple workspace view currently supports a maximum of 10 concurrently displayed workspaces.
+> Multiple Workspace View currently supports a maximum of 30 concurrently displayed workspaces.
>
-> If you check more than 10 workspaces, a warning message will appear.
Note that in the list of workspaces, you can see the directory, subscription, location, and resource group associated with each workspace. The directory corresponds to the tenant.
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/near-real-time-rules.md
The following limitations currently govern the use of NRT rules:
1. Queries can run only within a single workspace. There is no cross-workspace capability.
- 1. There is no event grouping. NRT rules produce a single alert that groups all the applicable events.
+ 1. Event grouping is not configurable. NRT rules produce a single alert that groups all the applicable events.
## Next steps
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/network-normalization-schema.md
The network normalization schema can represent any type of an IP network session
## Parsers
-This section discusses parsers, how to add parsers, and how to filter parser parameters.
+This section discusses parsers, how to add parsers, and how to filter parser parameters. For more information, see [ASIM parsers](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
-### Source-agnostic parsers
+### Unifying parsers
-To use the source-agnostic parsers that unify all of the out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query:
-
-| Name | Description | Usage instructions |
-| - | | |
-| <a name="imnetworksession"></a>**imNetworkSession** | Aggregative parser that uses *union* to include normalized events from all *network session* sources. |- Update this parser if you want to add or remove sources from source-agnostic analytics. <br><br>- Use this function in your source-agnostic queries.|
-| **ASimNetworkSession** | Similar to the [imNetworkSession](#imnetworksession) function, but without parameter support, so it doesn't force the **Logs** page time picker to use the `custom` value. |- Update these parsers if you want to add or remove sources from source-agnostic analytics.<br><br>- Use this function in your source-agnostic queries if you don't plan to use parameters.|
-| **vimNetworkSession\<vendor\>\<product\>** | Source-specific parsers implement normalization for a specific source. <br><br>Example: `vimNetworkSessionSysmonLinux` |- Add a source-specific parser for a source when there's no out-of-the-box normalizing parser. Update the `im` aggregative parser to include reference to your new parser. <br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use a source-specific parser for source-specific analytics.|
- **ASimNetworkSession\<vendor\>\<product\>>** | Source-specific parsers implement normalization for a specific source. <br><br>Unlike the `vim*` functions, the `ASim*` functions don't support parameters. |- Add a source-specific parser for a source when there's no out-of-the-box normalizing parser. Update the aggregative `ASim` parser to include reference to your new parser.<br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use an `ASim` source-specific parser for interactive queries when not using parameters.|
-| | | |
+To use the unifying parsers that unify all of the out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query.
Deploy ASIM parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
+#### <a name="imnetworksession"></a>imNetworkSession
+
+Aggregative parser that uses *union* to include normalized events from all *network session* sources.
+
+- Update this parser if you want to add or remove sources from source-agnostic analytics.
+- Use this function in your source-agnostic queries.
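+
+For example, a minimal sketch against this aggregative parser; the port and threshold are arbitrary:
+
+```kusto
+// Surface sources opening many RDP sessions across all normalized
+// network session sources.
+imNetworkSession
+| where DstPortNumber == 3389
+| summarize Sessions = count() by SrcIpAddr
+| where Sessions > 100
+```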
+
+#### ASimNetworkSession
+
+Similar to the [imNetworkSession](#imnetworksession) function, but without parameter support, so it doesn't force the **Logs** page time picker to use the `custom` value.
+
+- Update these parsers if you want to add or remove sources from source-agnostic analytics.
+- Use this function in your source-agnostic queries if you don't plan to use parameters.
+
+#### vimNetworkSession\<vendor\>\<product\>
+
+Source-specific parsers implement normalization for a specific source.
+
+Example: `vimNetworkSessionSysmonLinux`
+
+- Add a source-specific parser for a source when there's no out-of-the-box normalizing parser. Update the `im` aggregative parser to include reference to your new parser.
+- Update a source-specific parser to resolve parsing and normalization issues.
+- Use a source-specific parser for source-specific analytics.
+
+#### ASimNetworkSession\<vendor\>\<product\>
+
+Source-specific parsers implement normalization for a specific source.
+
+Unlike the `vim*` functions, the `ASim*` functions don't support parameters.
+
+- Add a source-specific parser for a source when there's no out-of-the-box normalizing parser. Update the aggregative `ASim` parser to include reference to your new parser.
+- Update a source-specific parser to resolve parsing and normalization issues.
+- Use an `ASim` source-specific parser for interactive queries when not using parameters.
+
+
### Out-of-the-box, source-specific parsers

Microsoft Sentinel provides the following built-in, product-specific Network Session parsers:

| **Name** | **Description** |
| | |
-| **Microsoft 365 Defender for Endpoint** | - Parametrized: vimNetworkSessionMicrosoft365Defender <br> - Regular: ASimNetworkSessionMicrosoft365Defender |
-| **Microsoft Defender for IoT - Endpoint (MD4IoT)** | - Parametrized: vimNetworkSessionMD4IoT <br> - Regular: ASimNetworkSessionMD4IoT |
-| **Microsoft Sysmon for Linux** | - Parametrized: vimNetworkSessionSysmonLinux<br> - Regular: ASimNetworkSessionSysmonLinux |
-| **Windows Events Firewall** | Windows firewall activity as represented by using Windows Events 515x, collected by using either the Log Analytics Agent or the Azure Monitor Agent into either the `Event` table or the `WindowsEvent` table.<br><br> - Parametrized: vimNetworkSessionMicrosoftWindowsEventFirewall <br> - Regular: ASimNetworkSessionMicrosoftWindowsEventFirewall
+| **Microsoft 365 Defender for Endpoint** | - Parametrized: `vimNetworkSessionMicrosoft365Defender` <br> - Regular: `ASimNetworkSessionMicrosoft365Defender` |
+| **Microsoft Defender for IoT - Endpoint (MD4IoT)** | - Parametrized: `vimNetworkSessionMD4IoT` <br> - Regular: `ASimNetworkSessionMD4IoT` |
+| **Microsoft Sysmon for Linux** | - Parametrized: `vimNetworkSessionSysmonLinux`<br> - Regular: `ASimNetworkSessionSysmonLinux` |
+| **Windows Events Firewall** | Windows firewall activity as represented by using Windows Events 515x, collected by using either the Log Analytics Agent or the Azure Monitor Agent into either the `Event` table or the `WindowsEvent` table.<br><br> - Parametrized: `vimNetworkSessionMicrosoftWindowsEventFirewall` <br> - Regular: `ASimNetworkSessionMicrosoftWindowsEventFirewall`
| | | ### Add your own normalized parsers
imNetworkSession (hostname_has_any = torProxies)
The Network Session information model is aligned with the [OSSEM Network entity schema](https://github.com/OTRF/OSSEM/blob/master/docs/cdm/entities/network.md).
-To conform with industry best practices, the Network Session schema uses the descriptors **Src** and **Dst** to identify the network session source and destination devices, without including the token **Dvc** in the field name.
+To conform with industry best practices, the Network Session schema uses the descriptors `Src` and `Dst` to identify the network session source and destination devices, without including the token `Dvc` in the field name.
-So, for example, the source device hostname and IP address are named **SrcHostname** and **SrcIpAddr**, respectively, and not **Src*Dvc*Hostname** and **Src*Dvc*IpAddr**. The prefix **Dvc** is only used for the reporting or intermediary device, as applicable.
+So, for example, the source device hostname and IP address are named `SrcHostname` and `SrcIpAddr`, respectively, and not `SrcDvcHostname` and `SrcDvcIpAddr`. The prefix `Dvc` is only used for the reporting or intermediary device, as applicable.
-Fields that describe the user and application associated with the source and destination devices also use the **Src** and **Dst** descriptors.
+Fields that describe the user and application associated with the source and destination devices also use the `Src` and `Dst` descriptors.
-Other ASIM schemas typically use **Target** instead of **Dst**.
+Other ASIM schemas typically use `Target` instead of `Dst`.
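+
+For example, the descriptors appear in queries as plain field prefixes; a minimal sketch:
+
+```kusto
+// Src*/Dst* describe the session endpoints; Dvc* describes the
+// reporting or intermediary device.
+imNetworkSession
+| project SrcHostname, SrcIpAddr, DstHostname, DstIpAddr, DvcHostname
+```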
### Common fields
The following fields are common to all network session activity logging:
| <a name="dstdomaintype"></a>**DstDomainType** | Recommended | Enumerated | The type of [DstDomain](#dstdomain), if known. Possible values include:<br>- `Windows (contoso\mypc)`<br>- `FQDN (docs.microsoft.com)`<br><br>Required if [DstDomain](#dstdomain) is used. | | **DstFQDN** | Optional | String | The destination device hostname, including domain information when available. <br><br>Example: `Contoso\DESKTOP-1282V4D` <br><br>**Note**: This field supports both traditional FQDN format and Windows domain\hostname format. The [DstDomainType](#dstdomaintype) reflects the format used. | | <a name="dstdvcid"></a>**DstDvcId** | Optional | String | The ID of the destination device as reported in the record.<br><br>Example: `ac7e9755-8eae-4ffc-8a02-50ed7a2216c3` |
-| **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEidIf`<br><br>If multiple IDs are available, use the first one from the preceding list, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.|
+| **DstDvcIdType** | Optional | Enumerated | The type of [DstDvcId](#dstdvcid), if known. Possible values include:<br> - `AzureResourceId`<br>- `MDEid`<br><br>If multiple IDs are available, use the first one from the preceding list, and store the others in the **DstDvcAzureResourceId** or **DstDvcMDEid** fields, respectively.<br><br>Required if **DstDeviceId** is used.|
| **DstDeviceType** | Optional | Enumerated | The type of the destination device. Possible values include:<br>- `Computer`<br>- `Mobile Device`<br>- `IOT Device`<br>- `Other` | | <a name="dstuserid"></a>**DstUserId** | Optional | String | A machine-readable, alphanumeric, unique representation of the destination user. <br><br>Supported formats and types include:<br>- **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br>- **UID** (Linux): `4578`<br>- **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br>- **OktaId**: `00urjk4znu3BcncfY0h7`<br>- **AWSId**: `72643944673`<br><br>Store the ID type in the [DstUserIdType](#dstuseridtype) field. If other IDs are available, we recommend that you normalize the field names to **DstUserSid**, **DstUserUid**, **DstUserAADID**, **DstUserOktaId**, and **UserAwsId**, respectively. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `S-1-12` | | <a name="dstuseridtype"></a>**DstUserIdType** | Optional | Enumerated | The type of the ID stored in the [DstUserId](#dstuserid) field. <br><br>Supported values are `SID`, `UID`, `AADID`, `OktaId`, and `AWSId`. |
These are the changes in version 0.2.1 of the schema:
For more information, see: -- [Normalization in Microsoft Sentinel](normalization.md)-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)-- [Advanced SIEM Information Model content](normalization-content.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-about-parsers.md
Title: Advanced SIEM Information Model (ASIM) Parsers | Microsoft Docs
+ Title: Use Advanced SIEM Information Model (ASIM) parsers | Microsoft Docs
description: This article explains how to use KQL functions as query-time parsers to implement the Advanced SIEM Information Model (ASIM)
-# Advanced SIEM Information Model (ASIM) parsers (Public preview)
+# Use Advanced SIEM Information Model (ASIM) parsers (Public preview)
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-In Microsoft Sentinel, parsing and [normalizing](normalization.md) happen at query time. Parsers are built as [KQL user-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) that transform data in existing tables, such as **CommonSecurityLog**, custom logs tables, or Syslog, into the normalized schema. Once the parser is saved as a workspace function, it can be used like any Microsoft Sentinel table.
+Use Advanced SIEM Information Model (ASIM) parsers instead of table names in your Microsoft Sentinel queries to view data in a normalized format and to include all data relevant to the schema in your query. Refer to the table below to find the relevant parser for each schema.
-> [!TIP]
-> Also watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM). For more information, see [Next steps](#next-steps).
->
+To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
> [!IMPORTANT] > ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
-## Source agnostic and source-specific parsers
-
-ASIM includes two levels of parsers: **source-agnostic** and **source-specific** parsers:
-### Source-agnostic parsers
+## Unifying parsers
-A **source-agnostic parser** combines all the sources normalized to the same schema and can be used to query all of them using normalized fields. The source agnostic parser name is `im<schema>`, where `<schema>` stands for the specific schema it serves.
+When using ASIM in your queries, use **unifying parsers** to combine all sources, normalized to the same schema, and query them using normalized fields. The unifying parser name is `_Im_<schema>` for built-in parsers and `im<schema>` for workspace deployed parsers, where `<schema>` stands for the specific schema it serves.
-For example, the following query uses the source-agnostic DNS parser to query DNS events using the `ResponseCodeName`, `SrcIpAddr`, and `TimeGenerated` normalized fields:
+For example, the following query uses the built-in unifying DNS parser to query DNS events using the `ResponseCodeName`, `SrcIpAddr`, and `TimeGenerated` normalized fields:
```kusto
-imDns
+_Im_Dns
| where isnotempty(ResponseCodeName) | where ResponseCodeName =~ "NXDOMAIN" | summarize count() by SrcIpAddr, bin(TimeGenerated,15m) ```
-A source-agnostic parser can combine several source-specific normalized parsers using the `union` KQL operator. The name of a source-specific normalized parser is `vim<schema><vendor><product>`. Therefore, the `imDns` parser looks as follows:
-
-```kusto
-union isfuzzy=true
-vimDnsEmpty,
-vimDnsCiscoUmbrella,
-vimDnsInfobloxNIOS,
-vimDnsMicrosoftOMS
-```
- > [!NOTE]
-> When using the ASIM source-agnostic parsers, which start with `im` in the **Logs** page, the time range selector is set to `custom`. You can still set the time range yourself. Alternatively, specify the time range using parser parameters.
+> When using the ASIM unifying filtering parsers in the **Logs** page, the time range selector is set to `custom`. You can still set the time range yourself. Alternatively, specify the time range using parser parameters.
>
-> Alternately, use an `ASim` parser, which does not support parameters, and also does not set the time-range picker to `custom` by default.
+> Alternately, use the parameter-less parsers, which start with `_ASim_` for built-in parsers and `ASim` for workspace deployed parsers. Those parsers do not set the time-range picker to `custom` by default.
>
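+
+For example, a minimal sketch using the parameter-less form (assuming the built-in `_ASim_Dns` parser is available in the workspace):
+
+```kusto
+// Query DNS events with the parameter-less parser and filter explicitly.
+_ASim_Dns
+| where TimeGenerated > ago(1d)
+| where ResponseCodeName =~ "NXDOMAIN"
+| summarize count() by SrcIpAddr
+```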
-### Source-specific parsers
-
-Adding **source-specific** normalized parsers to the source-agnostic parser enables you to include custom sources in built-in queries that use the source agnostic parsers.
-
-Source-specific parsers enable you to get immediate value from built-in content, such as analytics, workbooks, insights for your custom data.
-
-The source-specific parsers can also be used independently. For example, in an Infoblox-specific workbook, use the `vimDnsInfobloxNIOS` parser.
-
-## <a name="optimized-parsers"></a>Optimizing parsing using parameters
-
-Using parsers may impact your query performance, primarily from having to filter the results after parsing. For this reason, many parsers have optional filtering parameters, which enable you to filter before parsing and enhance query performance. Together with query optimization and pre-filtering efforts, ASIM parsers often provide better performance when compared to not using normalization at all.
-
-Use filtering parameters by adding one or more named parameters when invoking the parser. For example, the following query start ensures that only DNS queries for non-existent domains are returned:
-
-```kusto
-imDns(responsecodename='NXDOMAIN')
-```
-
-The previous example is similar to the following query, but is much more efficient.
-
-```kusto
-imDns | where ResponseCodeName == 'NXDOMAIN'
-```
-
-Each schema has a standard set of filtering parameters which are documented in the schema doc. Filtering parameters are entirely optional and are currently fully supported only for the DNS schema. Other schemas support standard filtering parameters without pre-filtering optimization.
-
-## Writing source-specific parsers
-
-A parser is a KQL query saved as a workspace function. Once saved, it can be used like built-in tables. The parser query includes the following parts:
-
-**Filter** > **Parse** > **Prepare fields**
-
-### Filtering
-
-#### Filtering the relevant records
-
-In many cases, a table includes multiple types of events. For example:
-* The Syslog table has data from multiple sources.
-* Custom tables may include information from a single source that provides more than one event type and can fit various schemas.
-
-Therefore, a parser should first filter only the records that are relevant to the target schema.
-
-Filtering in KQL is done using the `where` operator. For example, **Sysmon event 1** reports process creation and should be normalized to the **ProcessEvent** schema. The **Sysmon event 1** event is part of the `Event` table, and the following filter should be used:
-
-```kusto
-Event | where Source == "Microsoft-Windows-Sysmon" and EventID == 1
-```
-
-#### Filtering based on parser parameters
-
-When using [parameterized parsers](#optimized-parsers), make sure that your parser accepts the filtering parameters for the relevant schema, as documented in the reference article for that schema.
-
-The function article is identical for each schema. For example, for the DNS query parameterized parser signature:
+The following table lists the available unifying parsers:
-```kusto
-let DNSQuery_MS=(
- starttime:datetime=datetime(null),
- endtime:datetime=datetime(null),
- srcipaddr:string='*',
- domain_has_any:dynamic=dynamic([]),
- responsecodename:string='*',
- dnsresponsename:string='*',
- response_has_any_prefix:dynamic=dynamic([]),
- eventtype:string='lookup'
- )
-```
+| Schema | Built-in filtering parser | Built-in parameter-less parser | Workspace-deployed filtering parser | Workspace-deployed parameter-less parser |
+| ------ | ------------------------- | ------------------------------ | ----------------------------------- | ---------------------------------------- |
+| Authentication | | | imAuthentication | ASimAuthentication |
+| Dns | _Im_Dns | _ASim_Dns | imDns | ASimDns |
+| File Event | | | | imFileEvent |
+| Network Session | | | imNetworkSession | ASimNetworkSession |
+| Process Event | | | | - imProcess<br> - imProcessCreate<br> - imProcessTerminate |
+| Registry Event | | | | imRegistry |
+| Web Session | | | imWebSession | ASimWebSession |
+| | | | |
-Add your filters, based on your parameter values. When filtering, make sure that you:
-- **Filter before parsing using physical fields**. If the filtered results are not accurate enough, repeat the test after parsing to fine-tune your results. For more information, see ["filtering optimization"](#optimization).
+## Source-specific parsers
-``` kusto
-srcipaddr=='*' or ClientIP==srcipaddr
-array_length(domain_has_any) == 0 or Name has_any (domain_has_any)
-```
-
-> [!TIP]
-> An existing parser of the same type is a great starting for implementing parameter filtering.
->
+Unifying parsers use source-specific parsers to handle the unique aspects of each source. However, source-specific parsers can also be used independently. For example, in an Infoblox-specific workbook, use the `vimDnsInfobloxNIOS` source-specific parser.
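+
+For example, a short sketch of querying a source-specific parser directly (assuming the `vimDnsInfobloxNIOS` parser is deployed in the workspace):
+
+```kusto
+// Query only Infoblox NIOS DNS events, already normalized to the DNS schema.
+vimDnsInfobloxNIOS
+| where ResponseCodeName =~ "NXDOMAIN"
+| summarize count() by SrcIpAddr, bin(TimeGenerated, 15m)
+```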
-#### <a name="optimization"></a>Filtering optimization
--
-To ensure the performance of the parser, note the following filtering recommendations:
-- Always filter on built-in rather than parsed fields. While it's sometimes easier to filter using parsed fields, it has a dramatic impact on performance.
-- Use operators that provide optimized performance. In particular, `==`, `has`, and `startswith`. Using operators such as `contains` or `matches regex` also dramatically impacts performance.
+## <a name="optimized-parsers"></a>Optimizing parsing using parameters
-Filtering recommendations for performance may not always be trivial to follow. For example, using `has` is less accurate than `contains`. In other cases, matching the built-in field, such as `SyslogMessage`, is less accurate than comparing an extracted field, such as `DvcAction`. In such cases, we recommend that you still pre-filter using a performance-optimizing operator over a built-in field, and repeat the filter using more accurate conditions after parsing.
+Using parsers may impact your query performance, primarily because results must be filtered after parsing. For this reason, many parsers have optional filtering parameters, which enable you to filter before parsing and enhance query performance. With query optimization and pre-filtering efforts, ASIM parsers often provide better performance when compared to not using normalization at all.
-For an example, see the following [Infoblox DNS](https://aka.ms/AzSentinelInfobloxParser) parser snippet. The parser first checks that the SyslogMessage field `has` the word `client`. However, the term might be used in a different place in the message. Therefore, after parsing the `Log_Type` field, the parser checks again that the word `client` was indeed the field's value.
+When invoking the parser, use filtering parameters by adding one or more named parameters. For example, the following query ensures that only DNS queries for non-existent domains are returned:
```kusto
-Syslog | where ProcessName == "named" and SyslogMessage has "client"
-…
- | extend Log_Type = tostring(Parser[1]),
- | where Log_Type == "client"
+_Im_Dns(responsecodename='NXDOMAIN')
```
-> [!NOTE]
-> Parsers should not filter by time, as the query using the parser already filters for time.
->
-
-### Parsing
-
-Once the query selects the relevant records, it may need to parse them. Typically, parsing is needed if much of the event information is conveyed in a single text field.
-
-The KQL operators that perform parsing are listed below, ordered by their performance optimization. The first provides the most optimized performance, while the last provides the least optimized performance.
-
-|Operator |Description |
-|||
-|[split](/azure/data-explorer/kusto/query/splitfunction) | Parse a string of values delimited by a delimiter |
-|[parse_csv](/azure/data-explorer/kusto/query/parsecsvfunction) | Parse a string of values formatted as a CSV (comma-separated values) line. |
-|[parse](/azure/data-explorer/kusto/query/parseoperator) | Parse multiple values from an arbitrary string using a pattern, which can be a simplified pattern with better performance, or a regular expression. |
-|[extract_all](/azure/data-explorer/kusto/query/extractallfunction) | Parse single values from an arbitrary string using a regular expression. `extract_all` has a similar performance to `parse` if the latter uses a regular expression. |
-|[extract](/azure/data-explorer/kusto/query/extractfunction) | Extract a single value from an arbitrary string using a regular expression. <br><br>Using `extract` provides better performance than `parse` or `extract_all` if a single value is needed. However, using multiple activations of `extract` over the same source string is less efficient than a single `parse` or `extract_all` and should be avoided. |
-|[parse_json](/azure/data-explorer/kusto/query/parsejsonfunction) | Parse the values in a string formatted as JSON. If only a few values are needed from the JSON, using `parse`, `extract`, or `extract_all` provides better performance. |
-|[parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction) | Parse the values in a string formatted as XML. If only a few values are needed from the XML, using `parse`, `extract`, or `extract_all` provides better performance. |
-| | |
-
-In addition to parsing string, the parsing phase may require more processing of the original values, including:
-
-- **Formatting and type conversion**. The source field, once extracted, may need to be formatted to fit the target schema field. For example, you may need to convert a string representing date and time to a datetime field. Functions such as `todatetime` and `tohex` are helpful in these cases.
-
-- **Value lookup**. The value of the source field, once extracted, may need to be mapped to the set of values specified for the target schema field. For example, some sources report numeric DNS response codes, while the schema mandates the more common text response codes. The functions `iff` and `case` can be helpful to map a few values.
-
- For example, the Microsoft DNS parser assigns the `EventResult` field based on the Event ID and Response Code using an `iff` statement, as follows:
-
- ```kusto
- extend EventResult = iff(EventId==257 and ResponseCode==0 ,'Success','Failure')
- ```
-
- For several values, use `datatable` and `lookup`, as demonstrated in the same DNS parser:
-
- ```kusto
- let RCodeTable = datatable(ResponseCode:int,ResponseCodeName:string) [ 0, 'NOERROR', 1, 'FORMERR'....];
- ...
- | lookup RCodeTable on ResponseCode
- | extend EventResultDetails = case (
- isnotempty(ResponseCodeName), ResponseCodeName,
- ResponseCode between (3841 .. 4095), 'Reserved for Private Use',
- 'Unassigned')
- ```
-
-> [!NOTE]
-> The transformation does not allow using only `lookup`, as multiple values are mapped to `Reserved for Private Use` or `Unassigned`, and therefore the query uses both lookup and case.
-> Even so, the query is still much more efficient than using `case` for all values.
->
-
-### Prepare fields in the result set
-
-The parser has to prepare the results set fields to ensure that the normalized fields are used. As a guideline, original fields that are not normalized should not be removed from the result set unless there is a compelling reason to do so, such as if they create confusion.
-
-The following KQL operators are used to prepare fields:
-
-|Operator | Description | When to use in a parser |
-||||
-|**extend** | Creates calculated fields and adds them to the record | `Extend` is used if the normalized fields are parsed or transformed from the original data. For more information, see the example in the [Parsing](#parsing) section above. |
-|**project-rename** | Renames fields | If a field exists in the actual event and only needs to be renamed, use `project-rename`. <br><br>The renamed field still behaves like a built-in field, and operations on the field have much better performance. |
-|**project-away** | Removes fields. |Use `project-away` for specific fields that you want to remove from the result set. |
-|**project** | Selects fields that existed before or were created as part of the statement. Removes all other fields. | Not recommended for use in a parser, as the parser should not remove any other fields that are not normalized. <br><br>If you need to remove specific fields, such as temporary values used during parsing, use `project-away` to remove them from the results. |
-| | | |
-
-### Handle parsing variants
-
-In many cases, events in an event stream include variants that require different parsing logic.
-
-It's often tempting to build a parser from different subparsers, each handling another variant of the event that needs different parsing logic. Those subparsers, each a query by itself, are then unified using the `union` operator. This approach, while convenient, is *not* recommended as it significantly impacts the performance of the parser.
-
-When handling variants, use the following guidelines:
-
-|Scenario |Handling |
-|||
-|The different variants represent *different* event types, commonly mapped to different schemas | Use separate parsers |
-|The different variants represent the *same* event type but are structured differently. | If the variants are known, such as when there is a method to differentiate between the events before parsing, use the `case` operator to select the correct `extract_all` to run and field mapping, as demonstrated in the [Infoblox DNS parser](https://aka.ms/AzSentinelInfobloxParser). |
-|If `union` is unavoidable | When using `union` is unavoidable, make sure to use the following guidelines:<br><br>- Pre-filter using built-in fields in each one of the subqueries. <br>- Ensure that the filters are mutually exclusive. <br>- Consider not parsing less critical information, reducing the number of subqueries. |
-| | |
-
-## <a name="include"></a>Add your parser to the schema source-agnostic parser
-
-Normalization allows you to use your own content and built-in content with your custom data.
-
-For example, if you have a custom connector that receives DNS query activity log, you can ensure that the DNS query activity logs take advantage of any normalized DNS content.
-
-To do that, modify the relevant source agnostic parser to include the source-specific parser you created. For example, change the `imDns` source-agnostic parser to include your parser by adding your parser to the list of parsers in the `union` statement. If your parser supports parameters, include it with the parameter signature, like the vimDnsYyyXxx parser below. If it doesn't, include it without a signature, like the vimDnsWwwZzz parser below.
+The previous example is similar to the following query but is much more efficient.
```kusto
-let DnsGeneric=(starttime:datetime=datetime(null), endtime:datetime=datetime(null)... ){
- union isfuzzy=true
- vimDnsEmpty,
- vimDnsCiscoUmbrella (starttime, endtime, srcipaddr...),
- vimDnsInfobloxNIOS (starttime, endtime, srcipaddr...),
- vimDnsMicrosoftOMS (starttime, endtime, srcipaddr...),
- vimDnsYyyXxx (starttime, endtime, srcipaddr...),
- vimDnsWwwZzz
- };
- DnsGeneric( starttime, endtime, srcipaddr...)
+_Im_Dns | where ResponseCodeName == 'NXDOMAIN'
```
+Each schema has a standard set of filtering parameters documented in the relevant schema documentation. Filtering parameters are entirely optional. The following schemas support filtering parameters:
+- [Authentication](authentication-normalization-schema.md)
+- [DNS](dns-normalization-schema.md#filtering-parser-parameters)
+- [Network Session](network-normalization-schema.md#filtering-parser-parameters)
+- [Web Session](web-normalization-schema.md#filtering-parser-parameters)
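+
+For example, a sketch combining several DNS filtering parameters; the parameter names are taken from the DNS parser signature shown earlier, and the IP address is illustrative:
+
+```kusto
+// Pre-filter at parse time: last day, a single source IP, and NXDOMAIN responses only.
+_Im_Dns(
+    starttime = ago(1d),
+    endtime = now(),
+    srcipaddr = '192.168.1.10',
+    responsecodename = 'NXDOMAIN')
+| summarize count() by SrcIpAddr, bin(TimeGenerated, 15m)
+```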
-## Deploy parsers
-
-Deploy parsers manually by copying them to the Azure Monitor Log page and saving your change. This method is useful for testing. For more information, see [Create a function](../azure-monitor/logs/functions.md).
-
-However, to deploy a large number of parsers, we recommend that you use an ARM template. For example, you may want to use an ARM template when deploying a complete normalization solution that includes a source-agnostic parser and several source-specific parsers, or when deploying multiple parsers for different schemas for a source.
-For more information, see the [generic parser ARM template](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/ARM-Templates/ParserQuery). Use this template as a starting point and deploy your parser by pasting it in at the relevant point during the template deployment process. For example, see the [DNS parsers ARM template](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers/ASimDns/ARM).
-
-> [!TIP]
-> ARM templates can include different resources, so your parsers can be deployed alongside connectors, analytic rules, or watchlists, to name a few useful options. For example, your parser can reference a watchlist that will be deployed alongside it.
->
+## <a name="next-steps"></a>Next steps
+This article discusses the Advanced SIEM Information Model (ASIM) parsers. To learn how to develop your own parsers, see [Develop ASIM parsers](normalization-develop-parsers.md).
-## <a name="next-steps"></a>Next steps
+Learn more about ASIM parsers:
-This article discusses the Advanced SIEM Information Model (ASIM) parsers.
+- [ASIM parsers overview](normalization-parsers-overview.md)
+- [Manage ASIM parsers](normalization-manage-parsers.md)
+- [Develop custom ASIM parsers](normalization-develop-parsers.md)
-For more information, see:
+Learn more about the ASIM in general:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
-- [Advanced SIEM Information Model overview](normalization.md)
-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)
-- [Advanced SIEM Information Model content](normalization-content.md)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-about-schemas.md
An Advanced SIEM Information Model ([ASIM](normalization.md)) schema is a set of fields that represent an activity. Using the fields from a normalized schema in a query ensures that the query will work with every normalized source.
+To understand how schemas fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
Schema references outline the fields that comprise each schema. ASIM currently defines the following schemas:
- [Network Session](normalization-schema.md)
Based on these entities, [Windows event 4624](/windows/security/threat-protectio
This article provides an overview of normalization in Microsoft Sentinel and ASIM. For more information, see:
-- [Advanced SIEM Information Model overview](normalization.md)
-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)
-- [Advanced SIEM Information Model content](normalization-content.md)
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-content.md
Title: Advanced SIEM Information Model (ASIM) content | Microsoft Docs
-description: This article outlines the Microsoft Sentinel content that utilized Advanced SIEM Information Model (ASIM)
+ Title: Advanced SIEM Information Model (ASIM) security content | Microsoft Docs
+description: This article outlines the Microsoft Sentinel security content that uses the Advanced SIEM Information Model (ASIM).
Last updated 11/09/2021
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Normalized security content in Microsoft Sentinel includes analytics rules, hunting queries, and workbooks that work with source-agnostic normalization parsers.
+Normalized security content in Microsoft Sentinel includes analytics rules, hunting queries, and workbooks that work with unifying normalization parsers.
<a name="builtin"></a>You can find normalized, built-in content in Microsoft Sentinel galleries and [solutions](sentinel-solutions-catalog.md), create your own normalized content, or modify existing content to use normalized data.
-This article lists built-in Microsoft Sentinel content that has been configured to support ASIM. While links to the Microsoft Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Microsoft Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries.
+This article lists built-in Microsoft Sentinel content that has been configured to support the Advanced SIEM Information Model (ASIM). While links to the Microsoft Sentinel GitHub repository are provided below as a reference, you can also find these rules in the [Microsoft Sentinel Analytics rule gallery](detect-threats-built-in.md). Use the linked GitHub pages to copy any relevant hunting queries.
+
+To understand how normalized content fits within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
> [!TIP] > Also watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM). For more information, see [Next steps](#next-steps).
The following built-in file activity content is supported for ASIM normalization
- [SUNSPOT log file creation ](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/SUNSPOTLogFile.yaml) - [Known ZINC Comebacker and Klackring malware hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ZincJan272021IOCs.yaml)
-## Process Activity security content
+## Network session security content
+
+The following built-in network session related content is supported for ASIM normalization.
+
+### Analytics rules
+
+- [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Log4J_IPIOC_Dec112021.yaml)
+- [Excessive number of failed connections from a single source (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/ExcessiveDenyFromSource.yaml)
+- [Potential beaconing activity (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PossibleBeaconingActivity.yaml)
+- [User agent search for log4j exploitation attempt](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/UserAgentSearch_log4j.yaml)
+
+### Hunting queries
+
+- [Connection from external IP to OMI related Ports](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/NetworkConnectiontoOMIPorts.yaml)
+
+### Workbooks
+
+- Threat Intelligence Workbook
++
+## Process activity security content
The following built-in process activity content is supported for ASIM normalization.
The following built-in process activity content is supported for ASIM normalizat
- [Uncommon processes - bottom 5% (Normalized Process Events)](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/ASimProcess/imProcess_uncommon_processes.yaml) - [Unicode Obfuscation in Command Line](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/UnicodeObfuscationInCommandLine.yaml)
-## Registry Activity security content
+## Registry activity security content
The following built-in registry activity content is supported for ASIM normalization.
The following built-in registry activity content is supported for ASIM normaliza
- [Persisting Via IFEO Registry Key](https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/MultipleDataSources/PersistViaIFEORegistryKey.yaml)
-## <a name="modify"></a>Modify your content to use normalized data
+## Web session security content
-To enable your custom content to use normalization:
+The following built-in web session related content is supported for ASIM normalization.
-- Modify your queries to use the source-agnostic parsers relevant to the query.
-- Modify field names in your query to use the normalized schema field names.
-- When applicable, change conditions to use the normalized values of the fields in your query.
-
-For example, consider the **Rare client observed with high reverse DNS lookup count** DNS analytic rule, which works on DNS events send by Infoblox DNS servers:
-
-```kusto
-let threshold = 200;
-InfobloxNIOS
-| where ProcessName =~ "named" and Log_Type =~ "client"
-| where isnotempty(ResponseCode)
-| where ResponseCode =~ "NXDOMAIN"
-| summarize count() by Client_IP, bin(TimeGenerated,15m)
-| where count_ > threshold
-| join kind=inner (InfobloxNIOS
- | where ProcessName =~ "named" and Log_Type =~ "client"
- | where isnotempty(ResponseCode)
- | where ResponseCode =~ "NXDOMAIN"
- ) on Client_IP
-| extend timestamp = TimeGenerated, IPCustomEntity = Client_IP
-```
-
-The following code is the source-agnostic version, which uses normalization to provide the same detection for any source providing DNS query events:
-
-```kusto
-imDns(responsecodename='NXDOMAIN')
-| summarize count() by SrcIpAddr, bin(TimeGenerated,15m)
-| where count_ > threshold
-| join kind=inner (imDns(responsecodename='NXDOMAIN')) on SrcIpAddr
-| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr
-```
-
-The normalized, source-agnostic version has the following differences:
-- The `imDns` normalized parser is used instead of the Infoblox parser.
-
-- `imDns` fetches only DNS query events, so there is no need for checking the event type, as performed by the `where ProcessName =~ "named" and Log_Type =~ "client"` in the Infoblox version.
-
-- The `SrcIpAddr` field is used instead of `Client_IP`.
-
-- Parser parameter filtering is used for ResponseCodeName, eliminating the need for explicit where clauses.
-
-Apart from supporting any normalized DNS source, the normalized version is shorter and easier to understand.
-
-If the schema or parsers do not support filtering parameters, the changes are similar, excluding the last one. Instead the filtering conditions are kept from the original query as seen below:
+### Analytics rules
-```kusto
-let threshold = 200;
-imDns
-| where isnotempty(ResponseCodeName)
-| where ResponseCodeName =~ "NXDOMAIN"
-| summarize count() by SrcIpAddr, bin(TimeGenerated,15m)
-| where count_ > threshold
-| join kind=inner (imDns
- | where isnotempty(ResponseCodeName)
- | where ResponseCodeName =~ "NXDOMAIN"
- ) on SrcIpAddr
-| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr
-```
+- [Potential communication with a Domain Generation Algorithm (DGA) based hostname (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/PossibleDGAContacts.yaml)
+- [A client made a web request to a potentially harmful file (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/PotentiallyHarmfulFileTypes.yaml)
+- [A host is potentially running a crypto miner (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUACryptoMiners.yaml)
+- [A host is potentially running a hacking tool (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUAHackTool.yaml)
+- [A host is potentially running PowerShell to send HTTP(S) requests (ASIM Web Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimWebSession/UnusualUAPowershell.yaml)
## <a name="next-steps"></a>Next steps
This article discusses the Advanced SIEM Information Model (ASIM) content.
For more information, see:
- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
-- [Advanced SIEM Information Model overview](normalization.md)
-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)
-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-about-parsers.md)
+- [Modifying Microsoft Sentinel content to use the Advanced SIEM Information Model (ASIM) parsers](normalization-modify-content.md)
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-develop-parsers.md
+
+ Title: Develop Microsoft Sentinel Advanced SIEM Information Model (ASIM) parsers | Microsoft Docs
+description: This article explains how to develop, test, and deploy Microsoft Sentinel Advanced SIEM Information Model (ASIM) parsers.
++ Last updated : 11/09/2021+
+
+
+# Develop Advanced SIEM Information Model (ASIM) parsers (Public preview)
++
+Advanced SIEM Information Model (ASIM) users use *unifying parsers* instead of table names in their queries, to view data in a normalized format and to include all data relevant to the schema in the query. Unifying parsers, in turn, use *source-specific parsers* to handle the specific details of each source.
+
+Microsoft Sentinel provides built-in, source-specific parsers for many data sources. You may want to modify, or *develop*, these source-specific parsers in the following situations:
+
+- When your device provides events that fit an ASIM schema, but a source-specific parser for your device and the relevant schema is not available in Microsoft Sentinel.
+
+- When ASIM source-specific parsers are available for your device, but your device sends events in a method or a format different than expected by the ASIM parsers. For example:
+
+ - Your source device may be configured to send events in a non-standard way.
+
+ - Your device may have a different version than the one supported by the ASIM parser.
+
+ - The events might be collected, modified, and forwarded by an intermediary system.
+
+To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
+> [!IMPORTANT]
+> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Custom parser development process
+
+The following workflow describes the high-level steps in developing a custom ASIM source-specific parser:
+
+1. Identify the schema or schemas that the events sent from the source represent. For more information, see [Schema overview](normalization-about-schemas.md).
+
+1. [Develop](#developing-parsers) one or more ASIM parsers for your source. You'll need to develop a parser for each schema relevant to the source.
+
+1. [Test](#test-parsers) your parser.
+
+1. [Deploy](#deploy-parsers) the parsers into your Microsoft Sentinel workspaces.
+
+1. Update the relevant ASIM unifying parser to reference the new custom parser. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
+
+This article guides you through the development, testing, and deployment steps of this process.
+
+> [!TIP]
+> Also watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the related [slide deck](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM). For more information, see [Next steps](#next-steps).
+>
++
+## Developing parsers
+
+A custom parser is a KQL query developed in the Microsoft Sentinel **Logs** page. The parser query has three parts:
+
+**Filter** > **Parse** > **Prepare fields**
+
+### Prerequisites
+
+To develop a custom ASIM parser, you must have access to a workspace that stores relevant events.
+
+> [!TIP]
+> Start a new custom parser using an existing parser for the same schema. Using an existing parser is especially important for filtering parsers to make sure they accept all the parameters required by the schema.
+>
++
+### Filtering
+
+#### Filtering the relevant records
+
+In many cases, a table in Microsoft Sentinel includes multiple types of events. For example:
+* The Syslog table has data from multiple sources.
+* Custom tables may include information from a single source that provides more than one event type and can fit various schemas.
+
+Therefore, a parser should first filter only the records relevant to the target schema.
+
+Filtering in KQL is done using the `where` operator. For example, **Sysmon event 1** reports process creation, and is therefore normalized to the **ProcessEvent** schema. The **Sysmon event 1** event is part of the `Event` table, so you would use the following filter:
+
+```kusto
+Event | where Source == "Microsoft-Windows-Sysmon" and EventID == 1
+```
+
+#### Filtering based on parser parameters
+
+When developing [filtering parsers](normalization-about-parsers.md#optimized-parsers), make sure that your parser accepts the filtering parameters for the relevant schema, as documented in the reference article for that schema. Using an existing parser as a starting point ensures that your parser includes the correct function signature. In most cases, the actual filtering code is also similar for filtering parsers for the same schema.
+
+When filtering, make sure that you:
+
+- **Filter before parsing using physical fields**. If the filtered results are not accurate enough, repeat the test after parsing to fine-tune your results. For more information, see [filtering optimization](#optimization).
+ - **Do not filter if the parameter is not defined and still has the default value**.
+
+The following examples show how to implement filtering for a string parameter, where the default value is usually '\*', and for a list parameter, where the default value is usually an empty list.
+
+``` kusto
+srcipaddr=='*' or ClientIP==srcipaddr
+array_length(domain_has_any) == 0 or Name has_any (domain_has_any)
+```
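+
+Putting these pieces together, a minimal illustrative skeleton of a filtering parser; the table name `MyDns_CL` and the source fields `ClientIP` and `Name` are hypothetical:
+
+```kusto
+let MyDnsParser = (
+    starttime:datetime=datetime(null),
+    endtime:datetime=datetime(null),
+    srcipaddr:string='*',
+    domain_has_any:dynamic=dynamic([])
+    ) {
+    MyDns_CL
+    // Skip each filter when its parameter still has the default value.
+    | where (isnull(starttime) or TimeGenerated >= starttime)
+    | where (isnull(endtime) or TimeGenerated <= endtime)
+    | where (srcipaddr == '*' or ClientIP == srcipaddr)
+    | where (array_length(domain_has_any) == 0 or Name has_any (domain_has_any))
+    // Parsing and field preparation would follow here.
+};
+MyDnsParser(srcipaddr='10.0.0.1')
+```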
+
+#### <a name="optimization"></a>Filtering optimization
++
+To ensure the performance of the parser, note the following filtering recommendations:
+
+- **Always filter on built-in rather than parsed fields**. While it is sometimes easier to filter using parsed fields, it dramatically impacts performance.
+- **Use operators that provide optimized performance**. In particular, `==`, `has`, and `startswith`. Using operators such as `contains` or `matches regex` also dramatically impacts performance.
+
+Filtering recommendations for performance may not always be easy to follow. For example, using `has` is less accurate than `contains`. In other cases, matching the built-in field, such as `SyslogMessage`, is less accurate than comparing an extracted field, such as `DvcAction`. In such cases, we recommend that you still pre-filter using a performance-optimizing operator over a built-in field and repeat the filter using more accurate conditions after parsing.
+
+For an example, see the following [Infoblox DNS](https://aka.ms/AzSentinelInfobloxParser) parser snippet. The parser first checks that the SyslogMessage field `has` the word `client`. However, the term might be used in a different place in the message, so after parsing the `Log_Type` field, the parser checks again that the word `client` was indeed the field's value.
+
+```kusto
+Syslog | where ProcessName == "named" and SyslogMessage has "client"
+…
+ | extend Log_Type = tostring(Parser[1]),
+ | where Log_Type == "client"
+```
+
+> [!NOTE]
+> Parsers should not filter by time, as the query using the parser already filters for time.
+>
+
+### Parsing
+
+Once the query selects the relevant records, it may need to parse them. Typically, parsing is needed if multiple event fields are conveyed in a single text field.
+
+The KQL operators that perform parsing are listed below, ordered by their performance optimization. The first provides the most optimized performance, while the last provides the least optimized performance.
+
+|Operator |Description |
+|||
+|[split](/azure/data-explorer/kusto/query/splitfunction) | Parse a string of delimited values. |
+|[parse_csv](/azure/data-explorer/kusto/query/parsecsvfunction) | Parse a string of values formatted as a CSV (comma-separated values) line. |
+|[parse](/azure/data-explorer/kusto/query/parseoperator) | Parse multiple values from an arbitrary string using a pattern, which can be a simplified pattern with better performance, or a regular expression. |
+|[extract_all](/azure/data-explorer/kusto/query/extractallfunction) | Parse single values from an arbitrary string using a regular expression. `extract_all` has a similar performance to `parse` if the latter uses a regular expression. |
+|[extract](/azure/data-explorer/kusto/query/extractfunction) | Extract a single value from an arbitrary string using a regular expression. <br><br>Using `extract` provides better performance than `parse` or `extract_all` if a single value is needed. However, using multiple activations of `extract` over the same source string is less efficient than a single `parse` or `extract_all` and should be avoided. |
+|[parse_json](/azure/data-explorer/kusto/query/parsejsonfunction) | Parse the values in a string formatted as JSON. If only a few values are needed from the JSON, using `parse`, `extract`, or `extract_all` provides better performance. |
+|[parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction) | Parse the values in a string formatted as XML. If only a few values are needed from the XML, using `parse`, `extract`, or `extract_all` provides better performance. |
+| | |
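+
+For example, a brief illustrative use of the `parse` operator; the message layout is hypothetical:
+
+```kusto
+// Extract the queried domain and the client IP from a message such as:
+//   "query: contoso.com from 10.1.1.1"
+Syslog
+| parse SyslogMessage with "query: " DnsQuery:string " from " SrcIpAddr:string
+```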
+
+In addition to parsing strings, the parsing phase may require more processing of the original values, including:
+
+- **Formatting and type conversion**. The source field, once extracted, may need to be formatted to fit the target schema field. For example, you may need to convert a string representing date and time to a datetime field. Functions such as `todatetime` and `tohex` are helpful in these cases.
+
+- **Value lookup**. The value of the source field, once extracted, may need to be mapped to the set of values specified for the target schema field. For example, some sources report numeric DNS response codes, while the schema mandates the more common text response codes. The functions `iff` and `case` can be helpful to map a few values.
+
+ For example, the Microsoft DNS parser assigns the `EventResult` field based on the Event ID and Response Code using an `iff` statement, as follows:
+
+ ```kusto
+ extend EventResult = iff(EventId==257 and ResponseCode==0 ,'Success','Failure')
+ ```
+
+ For several values, use `datatable` and `lookup`, as demonstrated in the same DNS parser:
+
+ ```kusto
+ let RCodeTable = datatable(ResponseCode:int,ResponseCodeName:string) [ 0, 'NOERROR', 1, 'FORMERR'....];
+ ...
+ | lookup RCodeTable on ResponseCode
+ | extend EventResultDetails = case (
+ isnotempty(ResponseCodeName), ResponseCodeName,
+ ResponseCode between (3841 .. 4095), 'Reserved for Private Use',
+ 'Unassigned')
+ ```
+
+> [!NOTE]
+> The transformation does not allow using only `lookup`, as multiple values are mapped to `Reserved for Private Use` or `Unassigned`, and therefore the query uses both lookup and case.
+> Even so, the query is still much more efficient than using `case` for all values.
+>
+
+### Prepare fields in the result set
+
+The parser must prepare the fields in the results set to ensure that the normalized fields are used.
+
+>[!NOTE]
+> We recommend that you do not remove any of the original fields that are not normalized from the result set, unless there is a compelling reason to do so, such as if they create confusion.
+>
+
+The following KQL operators are used to prepare fields in your results set:
+
+|Operator | Description | When to use in a parser |
+||||
+|**extend** | Creates calculated fields and adds them to the record. | `Extend` is used if the normalized fields are parsed or transformed from the original data. <br><br>For more information, see the example in the [Parsing](#parsing) section above. |
+|**project-rename** | Renames fields. | If a field exists in the actual event and only needs to be renamed, use `project-rename`. <br><br>The renamed field still behaves like a built-in field, and operations on the field have much better performance. |
+|**project-away** | Removes fields. |Use `project-away` for specific fields that you want to remove from the result set. |
+|**project** | Selects fields that existed before, or were created as part of the statement, and removes all other fields. | Not recommended for use in a parser, as the parser should not remove any other fields that are not normalized. <br><br>If you need to remove specific fields, such as temporary values used during parsing, use `project-away` to remove them from the results. |
+| | | |
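+
+For example, an illustrative field preparation snippet; the source field `Client_IP` and the temporary field `TempField` are hypothetical:
+
+```kusto
+| extend EventType = 'Query'            // Constant value mandated by the schema
+| project-rename SrcIpAddr = Client_IP  // Rename an existing source field
+| project-away TempField                // Remove a temporary value used during parsing
+```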
+
+### Handle parsing variants
+
+In many cases, events in an event stream include variants that require different parsing logic.
+
+It is often tempting to build a parser from different subparsers, each handling another event variant that needs different parsing logic. Those subparsers, each a query by itself, are then unified using the `union` operator. This approach, while convenient, is *not* recommended as it significantly impacts the performance of the parser.
+
+When handling variants, use the following guidelines:
+
+|Scenario |Handling |
+|||
+|The different variants represent *different* event types, commonly mapped to different schemas | Use separate parsers. |
+|The different variants represent the *same* event type but are structured differently. | If the variants are known, such as when there is a method to differentiate between the events before parsing, use the `case` operator to select the correct `extract_all` to run and field mapping. <br><br>Example: [Infoblox DNS parser](https://aka.ms/AzSentinelInfobloxParser) |
+|`union` is unavoidable | When you must use `union`, make sure to use the following guidelines:<br><br>- Pre-filter using built-in fields in each one of the subqueries. <br>- Ensure that the filters are mutually exclusive. <br>- Consider not parsing less critical information, reducing the number of subqueries. |
+| | |
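+
+For example, a sketch of the `case` pattern for two known variants; the message layouts and regular expressions are illustrative only:
+
+```kusto
+// Choose the extraction logic based on a marker that distinguishes the variants.
+| extend Parsed = case(
+    SyslogMessage has "client",
+        extract_all(@'client\s(\S+)#(\d+)', SyslogMessage),
+    extract_all(@'query:\s(\S+)', SyslogMessage))
+```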
++
+## Deploy parsers
+
+Deploy parsers manually by copying them to the Azure Monitor Log page and saving your change. This method is useful for testing. For more information, see [Create a function](../azure-monitor/logs/functions.md).
+
+To deploy a large number of parsers, we recommend using parser ARM templates, as follows:
+
+1. Create a YAML file based on the relevant template for each schema and include your query in it. Start with the [YAML template](https://aka.ms/ASimYamlTemplates) relevant for your schema and parser type, filtering or parameter-less.
+
+1. Use the [ASIM Yaml to ARM template converter](https://aka.ms/ASimYaml2ARM) to convert your YAML file to an ARM template.
+
+1. Deploy your template using the [Azure portal](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal#edit-and-deploy-the-template) or [PowerShell](/azure/azure-resource-manager/templates/deploy-powershell).
+
+You can also combine multiple templates into a single deployment process using [linked templates](/azure/azure-resource-manager/templates/linked-templates?tabs=azure-powershell#linked-template).
+
+> [!TIP]
+> ARM templates can combine different resources, so parsers can be deployed alongside connectors, analytic rules, or watchlists, to name a few useful options. For example, your parser can reference a watchlist deployed alongside it.
+>
+
+## Test parsers
+
+### Mandatory tests
+
+The following tests are mandatory. A parser that fails them prevents queries that use the schema's unifying parsers from working correctly:
+
+- Make sure that the parser produces all mandatory fields.
+
+- Make sure that all normalized fields have the correct type.
+
+- Make sure that fields with logical types are populated only with permitted values. For example, make sure that an IP address field is always populated with a valid IP address, and that an enumerated field gets only permitted values.
+
+The ASIM parser testing tool tests for mandatory fields and correct field types.
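+
+In addition to the automated tests, a quick manual spot check of an enumerated field can help; the parser name `MyParser` is a placeholder:
+
+```KQL
+// List the distinct values of an enumerated field to verify that only permitted values appear.
+MyParser
+| summarize count() by EventType
+```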
+
+### Optional tests
+
+Check that fields are populated:
+- A field that is rarely or never populated may indicate incorrect parsing.
+- A field that is usually populated but not always may indicate less common variants of the event are not parsed correctly.
+
+You can use the following query to test how sparsely populated each field is.
+
+```KQL
+<parser name>
+| where TimeGenerated > ago(<time period>)
+| project p = pack_all()
+| mv-expand f = p
+| project f
+| extend key = tostring(bag_keys(f)[0])
+| summarize total=count(), empty=countif(strlen(f[key]) == 0) by key
+| extend sparseness = todouble(empty)/todouble(total)
+| sort by sparseness desc
+```
+
+Set the time period to the longest that performance will allow.
+
+### Using the ASIM parser testing tool
+
+Test the parser using the ASIM parser testing tool to find missing mandatory or recommended fields and fields with an incorrect type:
+
+1. [Deploy the ASIM testing tool]() to a Microsoft Sentinel workspace where your parser is deployed and works.
+
+1. Run the following query in the Microsoft Sentinel **Logs** page:
+
+ ```KQL
+ <parser name> | getschema | invoke ASimSchemaTester('<schema>')
+ ```
+
+Handle the results as follows:
+
+| Message | Action |
+| - | |
+| **(0) Error: Missing mandatory field [\<Field\>]** | Add this field to your parser. In many cases, this would be a derived value or a constant value, and not a field already available from the source. |
+| **(0) Error: type mismatch for field [\<Field\>]. It is currently [\<Type\>] and should be [\<Type\>]** | Make sure that the type of normalized field is correct, usually by using a [conversion function](/azure/data-explorer/kusto/query/scalarfunctions#conversion-functions) such as `tostring`. |
+| **(1) Warning: Missing recommended field [\<Field\>]** | Consider adding this field to your parser. |
+| **(1) Warning: Missing alias [\<Field\>]** | Check if the field the alias refers to exists and if so, add the alias. |
+| **(2) Info: Missing optional field [\<Field\>]** | While optional fields are often missing, it is worth reviewing the list to determine if any of the optional fields can be mapped from the source. |
+| **(2) Info: extra unnormalized field [\<Field\>]** | While unnormalized fields are valid, it is worth reviewing the list to determine if any of the unnormalized values can be mapped to an optional field. |
+|||
++
+## <a name="next-steps"></a>Next steps
+
+This article discusses developing ASIM parsers.
+
+Learn more about ASIM parsers:
+
+- [ASIM parsers overview](normalization-parsers-overview.md)
+- [Use ASIM parsers](normalization-about-parsers.md)
+- [Manage ASIM parsers](normalization-manage-parsers.md)
+
+Learn more about the ASIM in general:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-manage-parsers.md
+
+ Title: Manage Advanced SIEM Information Model (ASIM) parsers | Microsoft Docs
+description: This article explains how to manage Advanced SIEM Information Model (ASIM) parsers, add a custom parser, and replace a built-in parser.
++ Last updated : 11/09/2021+
+
+
+# Manage Advanced SIEM Information Model (ASIM) parsers (Public preview)
++
+Advanced SIEM Information Model (ASIM) users use *unifying parsers* instead of table names in their queries, to view data in a normalized format and get all the data relevant to the schema in a single query. Each unifying parser uses multiple source-specific parsers that handle each source's specific details.
+
+To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
+You may need to manage the source-specific parsers used by each unifying parser to:
+
+- **Add a custom, source-specific parser** to a unifying parser.
+
+- **Replace a built-in, source-specific parser** that's used by a unifying parser with a custom, source-specific parser. Replace built-in parsers when you want to:
+
+ - Use a version of the built-in parser other than the one used by default in the unifying parser.
+
+ - Prevent automated updates by preserving the version of the source-specific parser used by the unifying parser.
+
+ - Use a modified version of a built-in parser.
+
+This article guides you through managing your parsers, whether using built-in, unifying ASIM parsers or workspace-deployed unifying parsers.
+
+> [!IMPORTANT]
+> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+The procedures in this article assume that all source-specific parsers have already been deployed to your Microsoft Sentinel workspace.
+
+For more information, see [Develop ASIM parsers](normalization-develop-parsers.md#deploy-parsers).
+
+## Manage built-in unifying parsers
+
+### Set up your workspace
+
+Microsoft Sentinel users cannot edit built-in unifying parsers. Instead, use the following mechanisms to modify the behavior of built-in unifying parsers:
+
+- **To support adding source-specific parsers**, ASIM uses unifying, custom parsers. These custom parsers are workspace-deployed, and therefore editable. Built-in, unifying parsers automatically pick up these custom parsers, if they exist.
+
+ You can deploy initial, empty, unifying custom parsers to your Microsoft Sentinel workspace for all supported schemas, or individually for specific schemas. For more information, see [Deploy initial ASIM empty custom unifying parsers](https://aka.ms/ASimDeployEmptyCustomUnifyingParsers) in the Microsoft Sentinel GitHub repository.
+
+- **To support excluding built-in source-specific parsers**, ASIM uses a watchlist. Deploy the watchlist to your Microsoft Sentinel workspace from the Microsoft Sentinel [GitHub](https://aka.ms/DeployASimExceptionWatchlist) repository.
+
+### Add a custom parser to a built-in unifying parser
+
+To add a custom parser, insert a line into the custom unifying parser that references the new custom parser.
+
+Make sure to add both a filtering custom parser and a parameter-less custom parser. To learn more about how to edit parsers, refer to the document [Functions in Azure Monitor log queries](/azure/azure-monitor/logs/functions#edit-a-function).
+
+The syntax of the line to add is different for each schema:
+
+| Schema | Filtering parser | Parameter&#8209;less&nbsp;parser |
+| | - | |
+| DNS | **Name**: `Im_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)` | **Name**: `ASim_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_` |
+| | |
+
+When adding an additional parser to a unifying custom parser that already references parsers, make sure you add a comma at the end of the previous line.
+
+For example, the following code shows a custom unifying parser after having added the `added_parser`:
+
+```KQL
+union isfuzzy=true
+existing_parser(starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype),
+added_parser(starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)
+```
+
+### Use a modified version of a built-in parser
+
+To modify an existing, built-in source-specific parser:
+
+1. Create a custom parser based on the original parser and [add it](#add-a-custom-parser-to-a-built-in-unifying-parser) to the built-in parser.
+
+1. Add a record to the `ASim Disabled Parsers` watchlist.
+
+1. Define the `CallerContext` value with the names of any unifying parsers you want to exclude the parser from.
+
+1. Define the `SourceSpecificParser` value with the name of the parser you want to exclude, without a version specifier.
+
+For example, to exclude the Azure Firewall DNS parser, add the following records to the watchlist:
+
+| CallerContext | SourceSpecificParser |
+| - | - |
+| `_Im_Dns` | `_Im_Dns_AzureFirewall` |
+| `_ASim_Dns` | `_ASim_Dns_AzureFirewall` |
+| | |
+
+### Prevent an automated update of a built-in parser
+
+Use the following process to prevent automatic updates for built-in, source-specific parsers:
+
+1. Add the built-in parser version you want to use, such as `_Im_Dns_AzureFirewallV02`, to the custom unifying parser. For more information, see above, [Add a custom parser to a built-in unifying parser](#add-a-custom-parser-to-a-built-in-unifying-parser).
+
+1. Add an exception for the built-in parser. For example, when you want to entirely opt out from automatic updates, and therefore exclude a large number of built-in parsers, add:
+
+ - A record with `Any` as the `SourceSpecificParser` field, to exclude all parsers for the `CallerContext`.
+ - A record with `Any` in both the `CallerContext` and `SourceSpecificParser` fields, to exclude all built-in parsers.
+
+ For more information, see [Use a modified version of a built-in parser](#use-a-modified-version-of-a-built-in-parser).
+
+## Manage workspace-deployed unifying parsers
+
+### Add a custom parser to a workspace-deployed unifying parser
+
+To add a custom parser, insert a line into the `union` statement in the workspace-deployed unifying parser that references the new custom parser.
+
+Make sure to add both a filtering custom parser and a parameter-less custom parser. The syntax of the line to add is different for each schema:
+
+| Schema | Filtering parser | Parameter&#8209;less&nbsp;parser |
+| | -- | |
+| **Authentication** | **Name:** `ImAuthentication`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, targetusername_has)` | **Name:** `ASimAuthentication`<br><br> **Line to add:** `_parser_name_` |
+| **DNS** | **Name:** `ImDns`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any,`<br>` responsecodename, response_has_ipv4, response_has_any_prefix,`<br>` eventtype)` | **Name:** `ASimDns`<br><br>**Line to add:** `_parser_name_` |
+| **File Event** | | **Name:** `imFileEvent`<br><br>**Line to add:** `_parser_name_` |
+| **Network Session** | **Name:** `imNetworkSession`<br><br>**Line to add:**<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, url_has_any,`<br>` httpuseragent_has_any, hostname_has_any, dvcaction, eventresult)` | **Name:** `ASimNetworkSession`<br><br>**Line to add:** `_parser_name_` |
+| **Process Event** | | **Names:**<br> - `imProcess`<br> - `imProcessCreate`<br> - `imProcessTerminate`<br><br>**Line to add:** `_parser_name_` |
+| **Registry Event** | | **Name:** `imRegistry`<br><br>**Line to add:** `_parser_name_` |
+| **Web Session** | **Name:** `imWebSession`<br><br>**Line to add:**<br> `_parser_name_ parser (starttime, endtime, srcipaddr_has_any, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` | **Name:** `ASimWebSession`<br><br>**Line to add:** `_parser_name_` |
+| | |
+
+When adding an additional parser to a unifying parser, make sure you add a comma at the end of the previous line.
+
+For example, the following code shows the DNS filtering unifying parser, after having added the custom `added_parser`:
+
+```KQL
+ let Generic=(starttime:datetime=datetime(null), endtime:datetime=datetime(null) , srcipaddr:string='*' , domain_has_any:dynamic=dynamic([]) , responsecodename:string='*', response_has_ipv4:string='*' , response_has_any_prefix:dynamic=dynamic([]) , eventtype:string='lookup' ){
+ let DisabledParsers=materialize(_GetWatchlist('ASimDisabledParsers') | where SearchKey in ('Any', 'imDns') | extend SourceSpecificParser=column_ifexists('SourceSpecificParser','') | distinct SourceSpecificParser);
+ let imDnsBuiltInDisabled=toscalar('imDnsBuiltIn' in (DisabledParsers) or 'Any' in (DisabledParsers));
+ union isfuzzy=true
+ vimDnsEmpty
+ , vimDnsCiscoUmbrella ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsCiscoUmbrella' in (DisabledParsers) )))
+ , vimDnsInfobloxNIOS ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsInfobloxNIOS' in (DisabledParsers) )))
+ ...
+ , vimDnsAzureFirewall ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsAzureFirewall' in (DisabledParsers) )))
+ , vimDnsMicrosoftNXlog ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsMicrosoftNXlog' in (DisabledParsers) ))),
+ added_parser ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)
+ };
+ Generic( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)
+```
+
+### Use a modified version of a workspace-deployed parser
+
+Microsoft Sentinel users can directly modify workspace-deployed parsers. Create a parser based on the original, comment out the original, and then add your modified version to the workspace-deployed unifying parser.
+
+For example, the following code shows a DNS filtering unifying parser, having replaced the `vimDnsAzureFirewall` parser with a modified version:
+
+```KQL
+ let Generic=(starttime:datetime=datetime(null), endtime:datetime=datetime(null) , srcipaddr:string='*' , domain_has_any:dynamic=dynamic([]) , responsecodename:string='*', response_has_ipv4:string='*' , response_has_any_prefix:dynamic=dynamic([]) , eventtype:string='lookup' ){
+ let DisabledParsers=materialize(_GetWatchlist('ASimDisabledParsers') | where SearchKey in ('Any', 'imDns') | extend SourceSpecificParser=column_ifexists('SourceSpecificParser','') | distinct SourceSpecificParser);
+ let imDnsBuiltInDisabled=toscalar('imDnsBuiltIn' in (DisabledParsers) or 'Any' in (DisabledParsers));
+ union isfuzzy=true
+ vimDnsEmpty
+ , vimDnsCiscoUmbrella ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsCiscoUmbrella' in (DisabledParsers) )))
+ , vimDnsInfobloxNIOS ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsInfobloxNIOS' in (DisabledParsers) )))
+ ...
+ // , vimDnsAzureFirewall ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsAzureFirewall' in (DisabledParsers) )))
+ , vimDnsMicrosoftNXlog ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype, (imDnsBuiltInDisabled or('vimDnsMicrosoftNXlog' in (DisabledParsers) ))),
+ modified_vimDnsAzureFirewall ( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)
+ };
+ Generic( starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)
+```
+
+## <a name="next-steps"></a>Next steps
+
+This article discusses managing the Advanced SIEM Information Model (ASIM) parsers.
+
+Learn more about ASIM parsers:
+
+- [ASIM parsers overview](normalization-parsers-overview.md)
+- [Use ASIM parsers](normalization-about-parsers.md)
+- [Develop custom ASIM parsers](normalization-develop-parsers.md)
+
+Learn more about the ASIM in general:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Modify Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-modify-content.md
+
+ Title: Modify content to use the Microsoft Sentinel Advanced SIEM Information Model (ASIM) | Microsoft Docs
+description: This article explains how to convert Microsoft Sentinel content to use the Advanced SIEM Information Model (ASIM).
+ Last updated : 11/09/2021
+# Modify content to use the Advanced SIEM Information Model (ASIM) (Public preview)
++
+Normalized security content in Microsoft Sentinel includes analytics rules, hunting queries, and workbooks that work with unifying normalization parsers.
+
+<a name="builtin"></a>You can find normalized, out-of-the-box content in Microsoft Sentinel galleries and [solutions](sentinel-solutions-catalog.md), create your own normalized content, or modify existing, custom content to use normalized data.
+
+This article explains how to convert existing Microsoft Sentinel analytics rules to use [normalized data](normalization.md) with the Advanced SIEM Information Model (ASIM).
+
+To understand how normalized content fits within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
+> [!TIP]
+> Also watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM). For more information, see [Next steps](#next-steps).
+>
+
+> [!IMPORTANT]
+> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Modify custom content to use normalization
+
+To enable your custom Microsoft Sentinel content to use normalization:
+
+- Modify your queries to use any [unifying parsers](normalization-about-parsers.md) relevant to the query.
+
+- Modify field names in your query to use the [normalized schema](normalization-about-schemas.md) field names.
+
+- When applicable, change conditions to use the normalized values of the fields in your query.
+
+## Sample normalization for analytics rules
+
+For example, consider the **Rare client observed with high reverse DNS lookup count** DNS analytics rule, which works on DNS events sent by Infoblox DNS servers:
+
+```kusto
+let threshold = 200;
+InfobloxNIOS
+| where ProcessName =~ "named" and Log_Type =~ "client"
+| where isnotempty(ResponseCode)
+| where ResponseCode =~ "NXDOMAIN"
+| summarize count() by Client_IP, bin(TimeGenerated,15m)
+| where count_ > threshold
+| join kind=inner (InfobloxNIOS
+ | where ProcessName =~ "named" and Log_Type =~ "client"
+ | where isnotempty(ResponseCode)
+ | where ResponseCode =~ "NXDOMAIN"
+ ) on Client_IP
+| extend timestamp = TimeGenerated, IPCustomEntity = Client_IP
+```
+
+The following code is the source-agnostic version, which uses normalization to provide the same detection for any source that provides DNS query events. This version uses the built-in ASIM parsers:
+
+```kusto
+let threshold = 200;
+_Im_Dns(responsecodename='NXDOMAIN')
+| summarize count() by SrcIpAddr, bin(TimeGenerated,15m)
+| where count_ > threshold
+| join kind=inner (_Im_Dns(responsecodename='NXDOMAIN')) on SrcIpAddr
+| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr
+```
+
+To use workspace-deployed ASIM parsers, replace the `_Im_Dns` references with the workspace-deployed parser name:
+
+```kusto
+imDns(responsecodename='NXDOMAIN')
+```
+
+### Differences between built-in and workspace-deployed parsers
+
+The two options in the example [above](#sample-normalization-for-analytics-rules) are functionally identical. The normalized, source-agnostic version has the following differences:
+
+- The `_Im_Dns` or `imDns` normalized parsers are used instead of the Infoblox parser.
+
+- The normalized parsers fetch only DNS query events, so there's no need to check the event type, as done by the `where ProcessName =~ "named" and Log_Type =~ "client"` clause in the Infoblox version.
+
+- The `SrcIpAddr` field is used instead of `Client_IP`.
+
+- Parser parameter filtering is used for `ResponseCodeName`, eliminating the need for an explicit `where` clause.
++
+>[!NOTE]
+> Apart from supporting any normalized DNS source, the normalized version is shorter and easier to understand.
+>
+
+If the schema or parsers do not support filtering parameters, the changes are similar, except that the filtering conditions are kept from the original query. For example:
+
+```kusto
+let threshold = 200;
+imDns
+| where isnotempty(ResponseCodeName)
+| where ResponseCodeName =~ "NXDOMAIN"
+| summarize count() by SrcIpAddr, bin(TimeGenerated,15m)
+| where count_ > threshold
+| join kind=inner (imDns
+ | where isnotempty(ResponseCodeName)
+ | where ResponseCodeName =~ "NXDOMAIN"
+ ) on SrcIpAddr
+| extend timestamp = TimeGenerated, IPCustomEntity = SrcIpAddr
+```
+
+## <a name="next-steps"></a>Next steps
+
+This article discusses the Advanced SIEM Information Model (ASIM) content.
+
+For more information, see:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Parsers Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-parsers-overview.md
+
+ Title: Microsoft Sentinel Advanced SIEM Information Model (ASIM) parsers overview | Microsoft Docs
+description: This article provides an overview of Advanced SIEM Information Model (ASIM) parsers and links to more detailed ASIM parser documents.
+ Last updated : 11/09/2021
+
+
+# The Advanced SIEM Information Model (ASIM) parsers (Public preview)
++
+In Microsoft Sentinel, parsing and [normalizing](normalization.md) happen at query time. Parsers are built as [KQL user-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) that transform data in existing tables, such as **CommonSecurityLog**, custom logs tables, or Syslog, into the normalized schema.
+
+Users [use Advanced SIEM Information Model (ASIM) parsers](normalization-about-parsers.md) instead of table names in their queries to view data in a normalized format and to include all data relevant to the schema in the query.
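+
+As a minimal sketch, a source-specific parser is a KQL function body that projects a source table's columns onto the normalized field names. The `MyDns_CL` table and its columns below are hypothetical placeholders, not a real connector:
+
+```kusto
+// Hypothetical custom source table MyDns_CL, mapped to normalized DNS schema fields
+MyDns_CL
+| project
+    TimeGenerated,
+    SrcIpAddr = tostring(client_ip_s),
+    DnsQuery = tostring(query_s),
+    EventResult = tostring(result_s)
+```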
+
+To understand how parsers fit within the ASIM architecture, refer to the [ASIM architecture diagram](normalization.md#asim-components).
+
+> [!IMPORTANT]
+> ASIM is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Built-in ASIM parsers and workspace-deployed parsers
+
+Many ASIM parsers are built in and available out-of-the-box in every Microsoft Sentinel workspace. ASIM also supports deploying parsers to specific workspaces [from GitHub](https://aka.ms/DeployASIM), using an ARM template or manually. Both out-of-the-box and workspace-deployed parsers are functionally equivalent, but have slightly different naming conventions, allowing both parser sets to coexist in the same Microsoft Sentinel workspace.
+
+Each method has advantages over the other:
+
+| Compare | Built-in | Workspace-deployed |
+| | | |
+| **Advantages** | Exist in every Microsoft Sentinel instance. <br><br>Usable with other built-in content. | New parsers are often delivered first as workspace-deployed parsers.|
+| **Disadvantages** |Cannot be directly modified by users. <br><br>Fewer parsers available. | Not used by built-in content. |
+| **When to use** | Use in most cases that you need ASIM parsers. | Use when deploying new parsers, or for parsers not yet available out-of-the-box. |
+| | | |
+
+> [!TIP]
+> Using both built-in and workspace-deployed parsers is useful when you want to customize built-in parsers by adding custom, workspace-deployed parsers to the built-in parser hierarchy. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
+>
+
+## Parser hierarchy
+
+ASIM includes two levels of parsers: a **unifying** parser and **source-specific** parsers. Users typically use the **unifying** parser for the relevant schema, ensuring that all data relevant to the schema is queried. The **unifying** parser in turn calls **source-specific** parsers to perform the actual parsing and normalization, which is specific to each source.
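+
+Conceptually, a unifying parser is a `union` over the source-specific parsers for its schema. The following sketch is illustrative only; the deployed parsers also pass filtering parameters and honor parser exclusions:
+
+```kusto
+// Conceptual sketch of a DNS unifying parser
+union isfuzzy=true
+    vimDnsCiscoUmbrella,
+    vimDnsInfobloxNIOS,
+    vimDnsAzureFirewall
+```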
+
+>[!TIP]
+> The built-in parser hierarchy adds a layer to support customization. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
+
+## <a name="next-steps"></a>Next steps
+
+Learn more about ASIM parsers:
+
+- [Use ASIM parsers](normalization-about-parsers.md)
+- [Develop custom ASIM parsers](normalization-develop-parsers.md)
+- [Manage ASIM parsers](normalization-manage-parsers.md)
++
+For more about ASIM, in general, see:
+
+- Watch the [Deep Dive Webinar on Microsoft Sentinel Normalizing Parsers and Normalized Content](https://www.youtube.com/watch?v=zaqblyjQW6k) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjGtoRPQ2XYe3wQDz?e=R3dWeM)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Normalization Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization-schema-v1.md
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-The network normalization schema is used to describe reported network events, and is used by Microsoft Sentinel to enable source-agnostic analytics.
+The network normalization schema is used to describe reported network events, and is used by Microsoft Sentinel to enable unifying analytics.
For more information, see [Normalization and the Advanced SIEM Information Model (ASIM)](normalization.md).
The original version of the Microsoft Sentinel Network session normalization sch
Differences between version 0.1, documented in this article, and [version 0.2](network-normalization-schema.md) include:

-- In version 0.2, source-agnostic and source-specific parser names have been changed to conform to a standard ASIM naming convention.
-- Version 0.2 adds specific guidelines and source-agnostic parsers to accommodate specific device types.
+- In version 0.2, unifying and source-specific parser names have been changed to conform to a standard ASIM naming convention.
+- Version 0.2 adds specific guidelines and unifying parsers to accommodate specific device types.
The following sections describe how [version 0.2](network-normalization-schema.md) differs for specific fields.
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/normalization.md
This article provides an overview of the Advanced Security Information and Event
## Common ASIM usage
-The Advanced SIEM Information Model (ASIM) provides a seamless experience for handling various sources in uniform, normalized views, by providing the following functionality:
+ASIM provides a seamless experience for handling various sources in uniform, normalized views, by providing the following functionality:
- **Cross source detection**. Normalized analytics rules work across sources, on-premises and cloud, and detect attacks such as brute force or impossible travel across systems, including Okta, AWS, and Azure.
The Advanced SIEM Information Model (ASIM) provides a seamless experience for ha
### ASIM and the Open Source Security Events Metadata
-The Advanced SIEM Information Model aligns with the [Open Source Security Events Metadata (OSSEM)](https://ossemproject.com/intro.html) common information model, allowing for predictable entities correlation across normalized tables.
+ASIM aligns with the [Open Source Security Events Metadata (OSSEM)](https://ossemproject.com/intro.html) common information model, allowing for predictable entities correlation across normalized tables.
OSSEM is a community-led project that focuses primarily on the documentation and standardization of security event logs from diverse data sources and operating systems. The project also provides a Common Information Model (CIM) that can be used by data engineers during data normalization procedures, allowing security analysts to query and analyze data across diverse data sources.
For more information, see the [OSSEM reference documentation](https://ossemproje
The following image shows how non-normalized data can be translated into normalized content and used in Microsoft Sentinel. For example, you can start with a custom, product-specific, non-normalized table, and use a parser and a normalization schema to convert that table to normalized data. Use your normalized data in both Microsoft and custom analytics, rules, workbooks, queries, and more.
- :::image type="content" source="media/normalization/sentinel-information-model-components.png" alt-text="Non-normalized to normalized data conversion flow and usage in Microsoft Sentinel":::
+ :::image type="content" source="media/normalization/asim-architecture.png" alt-text="Non-normalized to normalized data conversion flow and usage in Microsoft Sentinel":::
-The Advanced SIEM Information Model includes the following components:
+ASIM includes the following components:
|Component |Description |
|||
-|**Normalized schemas** | Cover standard sets of predictable event types that you can use when building unified capabilities. <br><br>Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values. <br><br> ASIM currently defines the following schemas:<br> - [Authentication Event](authentication-normalization-schema.md)<br> - [DHCP Activity](dhcp-normalization-schema.md)<br> - [DNS Activity](dns-normalization-schema.md)<br> - [File Activity](file-event-normalization-schema.md) <br> - [Network Session](./network-normalization-schema.md)<br> - [Process Event](process-events-normalization-schema.md)<br> - [Registry Event](registry-event-normalization-schema.md)<br>- [Web Session](web-normalization-schema.md)<br><br>For more information, see [Advanced SIEM Information Model schemas](normalization-about-schemas.md). |
-|**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Deploy the Microsoft-developed normalizing parsers from the [`Parsers` folder in the Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM). <br><br>For more information, see [Advanced SIEM Information Model parsers](normalization-about-parsers.md). |
-|**Content for each normalized schema** | Includes analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content. <br><br>For more information, see [Advanced SIEM Information Model content](normalization-content.md). |
+|**Normalized schemas** | Cover standard sets of predictable event types that you can use when building unified capabilities. <br><br>Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values. <br><br> ASIM currently defines the following schemas:<br> - [Authentication Event](authentication-normalization-schema.md)<br> - [DHCP Activity](dhcp-normalization-schema.md)<br> - [DNS Activity](dns-normalization-schema.md)<br> - [File Activity](file-event-normalization-schema.md) <br> - [Network Session](./network-normalization-schema.md)<br> - [Process Event](process-events-normalization-schema.md)<br> - [Registry Event](registry-event-normalization-schema.md)<br>- [Web Session](web-normalization-schema.md)<br><br>For more information, see [ASIM schemas](normalization-about-schemas.md). |
+|**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and versions of the built-in parsers that can be modified can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim). <br><br>For more information, see [ASIM parsers](normalization-about-parsers.md). |
+|**Content for each normalized schema** | Includes analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content. <br><br>For more information, see [ASIM content](normalization-content.md). |
| | |

### ASIM terminology
-The Advanced SIEM Information Model uses the following terms:
+ASIM uses the following terms:
|Term |Description |
|||
The Advanced SIEM Information Model uses the following terms:
To start using ASIM:
-1. Deploy all ASIM parsers quickly from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
+- Activate analytics rule templates that use ASIM. For more information, see the [ASIM content list](normalization-content.md#builtin).
-1. Activate analytics rule templates that use ASIM. For more information, see the [Advanced SIEM Information Model (ASIM) content list](normalization-content.md#builtin).
+- Use the ASIM hunting queries from the Microsoft Sentinel GitHub repository, when querying logs in KQL in the Microsoft Sentinel **Logs** page. For more information, see the [ASIM content list](normalization-content.md#builtin).
-1. Use ASIM in your workspace, using the following methods:
+- Write your own analytics rules using ASIM or [convert existing ones](normalization-content.md#builtin).
- - Use the ASIM hunting queries from the Microsoft Sentinel GitHub repository, when querying logs in KQL in the Microsoft Sentinel **Logs** page. For more information, see the [Advanced SIEM Information Model (ASIM) content list](normalization-content.md#builtin).
-
- - Write your own analytics rules using ASIM or [convert existing ones](normalization-content.md#builtin).
-
- - Enable your custom data to use built-in analytics by [writing parsers](normalization-about-parsers.md) for your custom sources and [adding](normalization-about-parsers.md#include) them to the relevant source agnostic parser.
+- Enable your custom data to use built-in analytics by [writing parsers](normalization-develop-parsers.md) for your custom sources and [adding](normalization-manage-parsers.md) them to the relevant source-agnostic parser.
## <a name="next-steps"></a>Next steps
-This article provides an overview of normalization in Microsoft Sentinel and the Advanced SIEM Information Model.
+This article provides an overview of normalization in Microsoft Sentinel and ASIM.
For more information, see:

- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)
-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)
-- [Advanced SIEM Information Model content](normalization-content.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/process-events-normalization-schema.md
Microsoft Sentinel provides the following built-in, product-specific process eve
- **Sysmon process termination (Event 5)**, collected using the Log Analytics Agent or Azure Monitor Agent - **Microsoft 365 Defender for Endpoint process creation**
-To use the source-agnostic parsers that unify all of listed parsers and ensure that you analyze across all the configured sources, use the following table names in your queries:
+To use the unifying parsers that unify all of the listed parsers and ensure that your analysis runs across all the configured sources, use the following table names in your queries:
- **imProcessCreate**, for queries that require process creation information. These queries are the most common case.
- **imProcessTerminate**, for queries that require process termination information.
- **imProcessEvents**, for queries that require both process creation and termination information. In such cases, the `EventType` field enables you to distinguish between the events, and is set to `ProcessCreate` or `ProcessTerminate`, respectively. Process termination events generally include much less information than process creation events.
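
For example, a minimal query against the unifying parser, using the `EventType` field described in the list above to separate creation from termination events:

```kusto
imProcessEvents
| where TimeGenerated > ago(1d)
| summarize count() by EventType
```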
-Deploy the [source-agnostic and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelProcessEvents).
+Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelProcessEvents).
+
+For more information, see [ASIM parsers overview](normalization-parsers-overview.md).
## Add your own normalized parsers

When implementing custom parsers for the [Process Event](normalization-about-schemas.md#the-process-entity) information model, name your KQL functions using the following syntax: `imProcess<Type><vendor><Product>`, where `Type` is either `Create`, `Terminate`, or `Event` if the parser implements both creation and termination events.
-Add your KQL function to the `imProcess<Type>` and `imProcess` source-agnostic parsers to ensure that any content using the [Process Event](normalization-about-schemas.md#the-process-entity) model also uses your new parser.
+Add your KQL function to the `imProcess<Type>` and `imProcess` unifying parsers to ensure that any content using the [Process Event](normalization-about-schemas.md#the-process-entity) model also uses your new parser.
## Normalized content for process activity data
The process event schema references the following entities, which are central to
For more information, see:

-- [Normalization in Microsoft Sentinel](normalization.md)
-- [Microsoft Sentinel authentication normalization schema reference (Public preview)](authentication-normalization-schema.md)
-- [Microsoft Sentinel DNS normalization schema reference](dns-normalization-schema.md)
-- [Microsoft Sentinel file event normalization schema reference (Public preview)](file-event-normalization-schema.md)
-- [Microsoft Sentinel network normalization schema reference](./network-normalization-schema.md)
-- [Microsoft Sentinel registry event normalization schema reference (Public preview)](registry-event-normalization-schema.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Registry Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/registry-event-normalization-schema.md
Microsoft Sentinel provides the following built-in, product-specific registry ev
- **Sysmon registry monitoring events (Events 12, 13, and 14)**, collected using the Log Analytics Agent or Azure Monitor Agent - **Microsoft 365 Defender for Endpoint registry events**
-To use the source-agnostic parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use **imRegistry** as the table name in your query.
+To use the unifying parser that unifies all of the built-in parsers, and ensure that your analysis runs across all the configured sources, use **imRegistry** as the table name in your query.
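+
+For example, a minimal query against the unifying parser (an illustrative sketch):
+
+```kusto
+imRegistry
+| where TimeGenerated > ago(1h)
+| take 100
+```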
-Deploy the [source-agnostic and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelRegistry).
+Deploy the [unifying and source-specific parsers](normalization-about-parsers.md) from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelRegistry).
+
+For more information, see [ASIM parsers](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
### Add your own normalized parsers

When implementing custom parsers for the Registry Event information model, name your KQL functions using the following syntax: `imRegistry<vendor><Product>`.
-Add your KQL functions to the `imRegistry` source-agnostic parsers to ensure that any content using the Registry Event model also uses your new parser.
+Add your KQL functions to the `imRegistry` unifying parser to ensure that any content using the Registry Event model also uses your new parser.
## Normalized content
sentinel Security Alert Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/security-alert-schema.md
+
+ Title: Microsoft Sentinel security alert schema reference
+description: This article displays the schema of security alerts in Microsoft Sentinel.
+
+cloud: na
+documentationcenter: na
+ Last updated : 01/11/2022
+# Microsoft Sentinel security alert schema reference
++
+Microsoft Sentinel [analytics rules](detect-threats-built-in.md) create incidents as the result of **security alerts**. Security alerts can come from different sources, and accordingly use different kinds of analytics rules to create incidents:
+
+- **Scheduled** analytics rules generate alerts as the result of their regular queries of data in logs ingested from external sources, and those same rules create incidents from those alerts. (For the purposes of this document, "scheduled" rule alerts include **NRT rule alerts**.)
+
+- **Microsoft Security** analytics rules create incidents from alerts that are ingested as-is from other Microsoft security products, for example, Microsoft 365 Defender and Microsoft Defender for Cloud.
+
+Regardless of the source, these alerts are all stored together in the *SecurityAlert* table in your Log Analytics workspace. This article describes the schema of this table.
+
+Because alerts come from many sources, not all fields are used by all providers. Some fields may be left blank.
+
+## Schema definitions
+
+| Column Name | Type | Description |
+| | | |
+| **AlertLink** | string | A link to the alert in the portal of the originating product. |
+| **AlertName** | string | The display name of the alert. <ul><li>**Scheduled rule alerts:** taken from the rule name.<li>**Ingested alerts:** the display name of the alert in the originating product. |
+| **AlertSeverity** | string | The severity of the alert. [Informational / Low / Medium / High] |
+| **AlertType** | string | The type of alert. <ul><li>**Scheduled rule alerts:** taken from the rule ID.<li>**Ingested alerts:** some products group their alerts by type. In some cases, the type may be identical to or synonymous with the product name. |
+| **CompromisedEntity** | string | The display name of the main entity being alerted on. |
+| **ConfidenceLevel** | string | The confidence level of this alert: how sure the provider is that this is not a false positive. |
+| **ConfidenceScore** | real | The confidence score of the alert, on a scale of 0.0-1.0, if applicable. This property allows for a more fine-grained representation of the confidence level of the alert compared to the ConfidenceLevel field. |
+| **Description** | string | The description of the alert. |
+| **DisplayName** | string | The display name of the alert. Synonymous with *AlertName* but retained for compatibility. |
+| **EndTime** | datetime | The end time of the impact of the alert. <ul><li>**Scheduled rule alerts:** the value of the *TimeGenerated* field for the last *event* captured by the query.<li>**Ingested alerts:** the time of the last event or activity included in the alert. |
+| **Entities** | string | A list of the entities identified in the alert. This list can include a combination of entities of different types. The entities' types can be any of those defined in the schema, as described in the [entities documentation](entities-reference.md). |
+| **ExtendedLinks** | string | A bag (a collection) for all links related to the alert. This bag can include a combination of links of different types. |
+| **ExtendedProperties** | string | A collection of other properties of the alert, including user-defined properties. Any [custom details](surface-custom-details-in-alerts.md) defined in the alert, and any dynamic content in the [alert details](customize-alert-details.md), are stored here. |
+| **IsIncident** | boolean | DEPRECATED. Always set to *false*. |
+| **ProcessingEndTime** | datetime | The time of the alert's publishing. <ul><li>**Scheduled rule alerts:** the value of the *TimeGenerated* field.<li>**Ingested alerts:** the time that the originating product completes the production of the alert. |
+| **ProductComponentName** | string | The name of the component of the product that generated the alert. |
+| **ProductName** | string | The name of the product that generated the alert. |
+| **ProviderName** | string | The name of the alert provider (the service within the product) that generated the alert. |
+| **RemediationSteps** | string | A list of action items to take to remediate the alert. |
+| **ResourceId** | string | A unique identifier for the resource that is the subject of the alert. |
+| **SourceComputerId** | string | DEPRECATED. Was the agent ID on the server that created the alert. |
+| **SourceSystem** | string | DEPRECATED. Always populated with the string "Detection". |
+| **StartTime** | datetime | The start time of the impact of the alert. <ul><li>**Scheduled rule alerts:** the value of the *TimeGenerated* field for the first *event* captured by the query.<li>**Ingested alerts:** the time of the first event or activity included in the alert. |
+| **Status** | string | The status of the alert within the life cycle. [New / InProgress / Resolved / Dismissed / Unknown] |
+| **SystemAlertId** | string | The internal unique ID for the alert in Microsoft Sentinel. |
+| **Tactics** | string | A comma-delimited list of MITRE ATT&CK tactics associated with the alert. |
+| **Techniques** | string | A comma-delimited list of MITRE ATT&CK techniques associated with the alert. |
+| **TenantId** | string | The unique ID of the tenant. |
+| **TimeGenerated** | datetime | The time the alert was generated (in UTC). |
+| **Type** | string | The constant ('SecurityAlert') |
+| **VendorName** | string | The vendor of the product that produced the alert. |
+| **VendorOriginalId** | string | Unique ID for the specific alert instance, set by the originating product. |
+| **WorkspaceResourceGroup** | string | DEPRECATED |
+| **WorkspaceSubscriptionId** | string | DEPRECATED |
+| | | |
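+
+For example, the following query (an illustrative sketch, not part of the schema itself) reads recent alerts and parses the JSON-formatted **Entities** field into a dynamic value:
+
+```kusto
+SecurityAlert
+| where TimeGenerated > ago(1d)
+| extend EntitiesParsed = parse_json(Entities)
+| project TimeGenerated, AlertName, AlertSeverity, ProductName, EntitiesParsed
+```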
+
+## Next steps
+
+Learn more about security alerts and analytics rules:
+
+- [Detect threats out-of-the-box](detect-threats-built-in.md)
+
+- [Create custom analytics rules to detect threats](detect-threats-custom.md)
+
+- [Export and import analytics rules to and from ARM templates](import-export-analytics-rules.md)
sentinel Watchlist Schemas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlist-schemas.md
Last updated 11/09/2021
# Microsoft Sentinel built-in watchlist template schemas (Public preview)
-
-This article details the schemas used in each built-in watchlist template provided by Microsoft Sentinel. For more information, see [Create a new watchlist using a template (Public preview)](watchlists.md#create-a-new-watchlist-using-a-template-public-preview).
-
-> [!IMPORTANT]
-> The Microsoft Sentinel watchlist templates are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+This article details the schemas used in each built-in watchlist template provided by Microsoft Sentinel. For more information, see [Create a new watchlist by using a template (Public preview)](watchlists-create.md#create-a-watchlist-by-using-a-template-public-preview).
+The Microsoft Sentinel watchlist templates are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## High Value Assets
The Service Accounts watchlist lists service accounts and their owners, and incl
## Next steps
-For more information, see [Use Microsoft Sentinel watchlists](watchlists.md).
+For more information, see:
+- [Use watchlists in Microsoft Sentinel](watchlists.md)
+- [Create watchlists](watchlists-create.md)
sentinel Watchlists Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlists-create.md
+
+ Title: Create watchlists - Microsoft Sentinel
+description: Create watchlists in Microsoft Sentinel for allowlists or blocklists, to enrich event data, and to help investigate threats.
+ Last updated : 1/04/2022
+# Create watchlists in Microsoft Sentinel
+
+Watchlists in Microsoft Sentinel allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist with a list of high value assets, terminated employees, or service accounts in your environment.
+
+Create a watchlist from a local file or by using a template.
+
+File uploads are currently limited to files of up to 3.8 MB in size. Before you create a watchlist, review the [limitations of watchlists](watchlists.md).
+
+## Create a watchlist from a local file
+
+You can upload a CSV file from your local machine to create a watchlist.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. Select **+ Add new**.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-new.png" alt-text="new watchlist" lightbox="./media/watchlists/sentinel-watchlist-new.png":::
+
+1. On the **General** page, provide the name, description, and alias for the watchlist.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-general.png" alt-text="watchlist general page":::
+
+1. Select **Next: Source**.
+1. Use the information in the following table to upload your watchlist data.
++
+ |Field |Description |
+ |||
+ |Select a type for the dataset | CSV file with a header (.csv) |
 |Number of lines before row with headings | Enter the number of lines that precede the header row in your data file. |
+ |Upload file | Either drag and drop your data file, or select **Browse for files** and select the file to upload. |
+ |SearchKey | Enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey. |
+
+1. Select **Next: Review and Create**.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-source.png" alt-text="watchlist source page" lightbox="./media/watchlists/sentinel-watchlist-source.png":::
++
+1. Review the information, verify that it's correct, wait for the **Validation passed** message, and then select **Create**.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-review.png" alt-text="watchlist review page":::
+
+ A notification appears once the watchlist is created.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-complete.png" alt-text="watchlist successful creation notification" lightbox="./media/watchlists/sentinel-watchlist-complete.png":::
+
+## Create a watchlist by using a template (public preview)
+
+Download one of the watchlist templates from Microsoft Sentinel to populate with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
+
+Each built-in watchlist template has its own set of data listed in the CSV file attached to the template. For more information, see [Built-in watchlist schemas](watchlist-schemas.md).
+
+The ability to create a watchlist by using a template is currently in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+To download one of the watchlist templates:
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. Select the tab **Templates (Preview)**.
+1. Select a template from the list to view details of the template in the right pane.
+1. Select the ellipses **...** at the end of the row.
+1. Select **Download Schema**.
+
+ :::image type="content" source="./media/watchlists/create-watchlist-download-schema.png" alt-text="Screenshot of templates tab with download schema selected.":::
+
+1. Populate your local version of the file and save it locally as a CSV file.
+
+To create the watchlist from the template you populated:
+
+1. From appropriate workspace in Microsoft Sentinel, select **Watchlist**.
+1. Select the tab **Templates (Preview)**.
+1. Select the appropriate template from the list to view details of the template in the right pane.
+1. Select **Create from template**.
+
+ :::image type="content" source="./media/watchlists/create-watchlist-from-template.png" alt-text="Create a watchlist from a built-in template." lightbox="./media/watchlists/create-watchlist-from-template.png":::
+
+1. On the **General** tab, notice that the **Name**, **Description**, and **Watchlist Alias** fields are all read-only.
+1. On the **Source** tab, select **Browse for files** and select the file you created from the template.
+1. Select **Next: Review and Create** > **Create**.
+
+## Deleted and recreated watchlists in Log Analytics view
+
+If you delete and recreate a watchlist, you might see both the deleted and recreated entries in Log Analytics within the five-minute SLA for data ingestion. If you see these entries together in Log Analytics for a longer period of time, submit a support ticket.
+
+## Next steps
+
+To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
+- [Manage watchlists](watchlists-manage.md)
+- [Build queries and detection rules with watchlists](watchlists-queries.md)
sentinel Watchlists Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlists-manage.md
+
+ Title: Edit watchlist - Microsoft Sentinel
+description: Edit or add items to watchlists in Microsoft Sentinel watchlists.
+ Last updated : 1/04/2022
+# Manage watchlists in Microsoft Sentinel
+
+We recommend that you edit an existing watchlist instead of deleting and recreating it. Log Analytics has a five-minute SLA for data ingestion. If you delete and recreate a watchlist, you might see both the deleted and recreated entries in Log Analytics during this five-minute window. If you see these duplicate entries in Log Analytics for a longer period of time, submit a support ticket.
+
+## Edit a watchlist in Microsoft Sentinel
+
+Edit a watchlist to change an existing item or add a new one.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. Select the watchlist you want to edit.
+1. Select **Edit watchlist items** on the details pane.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit.png" alt-text="Screen shot showing how to edit a watchlist" lightbox="./media/watchlists/sentinel-watchlist-edit.png":::
+
+1. To edit an existing watchlist item:
+ 1. Select the checkbox of that watchlist item.
+ 1. Edit the item.
+ 1. Select **Save**.
+ 1. Select **Yes** at the confirmation prompt.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-change.png" alt-text="Screen shot showing how to mark and edit a watchlist item.":::
+
+   :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-confirm.png" alt-text="Screen shot confirming your changes.":::
+
+1. To add a new item to your watchlist:
+ 1. Select **Add new**.
+ 1. Fill in the fields in the **Add watchlist item** panel.
+ 1. Select **Add** at the bottom of that panel.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-add.png" alt-text="Screen shot showing how to add a new item to your watchlist.":::
+
+## Next steps
+
+To learn more about Microsoft Sentinel, see the following articles:
+
+- [Use watchlists in Microsoft Sentinel](watchlists.md)
+- Learn how to [get visibility into your data and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
+
sentinel Watchlists Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlists-queries.md
+
+ Title: Build queries or rules with watchlists - Microsoft Sentinel
+description: Use watchlists in searches or detection rules for Microsoft Sentinel.
+ Last updated : 1/04/2022
+# Build queries or detection rules with watchlists in Microsoft Sentinel
+
+Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the *SearchKey*. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches.
+
+For optimal query performance, use **SearchKey** as the key for joins in your queries.
+
+## Build queries with watchlists
+
+To use a watchlist in a search query, write a Kusto query that uses the `_GetWatchlist('watchlist-name')` function and uses **SearchKey** as the key for your join.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Watchlist**.
+1. Select the watchlist you want to use.
+1. Select **View in Log Analytics**.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-list.png" alt-text="use watchlists in queries" lightbox="./media/watchlists/sentinel-watchlist-queries-list.png" :::
+
+1. Review the **Results** tab. The items in your watchlist are automatically extracted for your query.
+
+ The example below shows the results of the extraction of the **Name** and **IP Address** fields. The **SearchKey** is shown as its own column.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-fields.png" alt-text="queries with watchlist fields" lightbox="./media/watchlists/sentinel-watchlist-queries-fields.png":::
+
+ The timestamp on your queries will be ignored in both the query UI and in scheduled alerts.
+
+1. Write a query that uses the `_GetWatchlist('watchlist-name')` function and uses **SearchKey** as the key for your join.
+
+   For example, the following query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named `mywatchlist`.
+
+ ```kusto
+ Heartbeat
+ | lookup kind=leftouter _GetWatchlist('mywatchlist')
+ on $left.RemoteIPCountry == $right.SearchKey
+ ```
+
+ The following image shows the results of this example query in Log Analytics.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-join.png" alt-text="Screenshot of queries against watchlist as lookup" lightbox="./media/watchlists/sentinel-watchlist-queries-join.png":::
+
+## Create an analytics rule with a watchlist
+
+To use watchlists in analytics rules, create a rule that uses the `_GetWatchlist('watchlist-name')` function in the query.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **Configuration**, select **Analytics**.
+1. Select **Create** and the type of rule you want to create.
+1. On the **General** tab, enter the appropriate information.
+1. On the **Set rule logic** tab, under **Rule query** use the `_GetWatchlist('<watchlist>')` function in the query.
+
+   For example, let's say you have a watchlist named "ipwatchlist" that you created from a CSV file with the following values:
+
+ |IPAddress,Location |
+ ||
+ | 10.0.100.11,Home |
+ |172.16.107.23,Work |
+ |10.0.150.39,Home |
+ |172.20.32.117,Work |
+
+ The CSV file looks something like the following image.
+ :::image type="content" source="./media/watchlists/create-watchlist.png" alt-text="Screenshot of four items in a CSV file that's used for the watchlist.":::
+
+ To use the `_GetWatchlist` function for this example, your query would be `_GetWatchlist('ipwatchlist')`.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-new-other.png" alt-text="Screenshot that shows the query returns the four items from the watchlist.":::
+
+ In this example, we only include events from IP addresses in the watchlist:
+
+ ```kusto
+ //Watchlist as a variable
+ let watchlist = (_GetWatchlist('ipwatchlist') | project IPAddress);
+ Heartbeat
+ | where ComputerIP in (watchlist)
+ ```
+
+ The following example query uses the watchlist inline with the query and the search key defined for the watchlist.
+
+ ```kusto
+ //Watchlist inline with the query
+ //Use SearchKey for the best performance
+ Heartbeat
+ | where ComputerIP in (
+ (_GetWatchlist('ipwatchlist')
+ | project SearchKey)
+ )
+ ```
+
+ The following image shows this last query used in the rule query.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-analytics-rule.png" alt-text="use watchlists in analytics rules":::
+
+1. Complete the rest of the tabs in the **Analytics rule wizard**.
+
+For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
+
+## View list of watchlist aliases
+
+You might need to see a list of watchlist aliases to identify a watchlist to use in a query or analytics rule.
+
+1. In the Azure portal, go to **Microsoft Sentinel** and select the appropriate workspace.
+1. Under **General**, select **Logs**.
+1. If you see a list of queries, close the **Queries** window.
+1. On the **New Query** page, run the following query: `_GetWatchlistAlias`.
+1. Review the list of aliases in the **Results** tab.
+
+ :::image type="content" source="./media/watchlists/sentinel-watchlist-alias.png" alt-text="list watchlists" lightbox="./media/watchlists/sentinel-watchlist-alias.png":::
+
+## Next steps
+
+In this document, you learned how to use watchlists in Microsoft Sentinel to enrich data and improve investigations. To learn more about Microsoft Sentinel, see the following articles:
+
+- [Create watchlists](watchlists-create.md)
+- Learn how to [get visibility into your data and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Watchlists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/watchlists.md
Title: Use Microsoft Sentinel watchlists
-description: This article describes how to use Microsoft Sentinel watchlists to create allowlists/blocklists, enrich event data, and assist in investigating threats.
---
+ Title: What is a watchlist - Microsoft Sentinel
+description: Learn what watchlists are in Microsoft Sentinel and when to use them.
Previously updated : 11/09/2021 Last updated : 1/04/2022
-# Use Microsoft Sentinel watchlists
+# Use watchlists in Microsoft Sentinel
+Watchlists in Microsoft Sentinel allow you to correlate data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist with a list of high-value assets, terminated employees, or service accounts in your environment.
-Microsoft Sentinel watchlists enable the collection of data from external data sources for correlation with the events in your Microsoft Sentinel environment. Once created, you can use watchlists in your search, detection rules, threat hunting, and response playbooks. Watchlists are stored in your Microsoft Sentinel workspace as name-value pairs and are cached for optimal query performance and low latency.
+Use watchlists in your search, detection rules, threat hunting, and response playbooks.
-> [!IMPORTANT]
-> Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
+Watchlists are stored in your Microsoft Sentinel workspace as name-value pairs and are cached for optimal query performance and low latency.
-Common scenarios for using watchlists include:
+## When to use watchlists
-- **Investigating threats** and responding to incidents quickly with the rapid import of IP addresses, file hashes, and other data from CSV files. Once imported, you can use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
+Use watchlists to help you with the following scenarios:
-- **Importing business data** as a watchlist. For example, import user lists with privileged system access, or terminated employees, and then use the watchlist to create allowlists and blocklists used to detect or prevent those users from logging in to the network.
+- **Investigate threats** and respond to incidents quickly with the rapid import of IP addresses, file hashes, and other data from CSV files. After you import the data, use watchlist name-value pairs for joins and filters in alert rules, threat hunting, workbooks, notebooks, and general queries.
-- **Reducing alert fatigue**. Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert, and prevent benign events from becoming alerts.
+- **Import business data** as a watchlist. For example, import user lists with privileged system access, or terminated employees. Then, use the watchlist to create allowlists and blocklists to detect or prevent those users from logging in to the network.
-- **Enriching event data**. Use watchlists to enrich your event data with name-value combinations derived from external data sources.
+- **Reduce alert fatigue**. Create allowlists to suppress alerts from a group of users, such as users from authorized IP addresses that perform tasks that would normally trigger the alert. Prevent benign events from becoming alerts.
-> [!NOTE]
-> - The use of watchlists should be limited to reference data, as they are not designed for large data volumes.
->
-> - The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items do not count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead.
->
-> - Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported.
+- **Enrich event data**. Use watchlists to enrich your event data with name-value combinations derived from external data sources.
-## Create a new watchlist
+## Limitations of watchlists
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Configuration** > **Watchlist** and then select **+ Add new**.
+Before you create a watchlist, be aware of the following limitations:
- :::image type="content" source="./media/watchlists/sentinel-watchlist-new.png" alt-text="new watchlist" lightbox="./media/watchlists/sentinel-watchlist-new.png":::
+- The use of watchlists should be limited to reference data, as they aren't designed for large data volumes.
+- The **total number of active watchlist items** across all watchlists in a single workspace is currently limited to **10 million**. Deleted watchlist items don't count against this total. If you require the ability to reference large data volumes, consider ingesting them using [custom logs](../azure-monitor/agents/data-sources-custom-logs.md) instead.
+- Watchlists can only be referenced from within the same workspace. Cross-workspace and/or Lighthouse scenarios are currently not supported.
+- File uploads are currently limited to files of up to 3.8 MB in size.
-1. On the **General** page, provide the name, description, and alias for the watchlist, and then select **Next: Source**.
+## Options to create watchlists
- :::image type="content" source="./media/watchlists/sentinel-watchlist-general.png" alt-text="watchlist general page":::
+You can create a watchlist from a local file or by using a template (in public preview).
-1. On the **Source** page, select the dataset type (currently only CSV is available), enter the number of lines **before the header row** in your data file, and then choose a file to upload in one of two ways:
- 1. Click the **Browse for files** link in the **Upload file** box and select your data file to upload.
- 1. Drag and drop your data file onto the **Upload file** box.
+To create a watchlist from a template, download a watchlist template from Microsoft Sentinel and populate it with your data. Then upload that file when you create the watchlist in Microsoft Sentinel.
- You will see a preview of the first 50 rows of results in the wizard screen.
+For more information, see the following articles:
-1. In the **SearchKey** field, enter the name of a column in your watchlist that you expect to use as a join with other data or a frequent object of searches. For example, if your server watchlist contains country names and their respective two-letter country codes, and you expect to use the country codes often for search or joins, use the **Code** column as the SearchKey.
+- [Create watchlists in Microsoft Sentinel](watchlists-create.md)
+- [Built-in watchlist schemas](watchlist-schemas.md)
-1. <a name="review-and-create"></a>Select **Next: Review and Create**.
+## Watchlists in queries for searches and detection rules
- :::image type="content" source="./media/watchlists/sentinel-watchlist-source.png" alt-text="watchlist source page" lightbox="./media/watchlists/sentinel-watchlist-source.png":::
+Query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. When you create a watchlist, you define the *SearchKey*. The search key is the name of a column in your watchlist that you expect to use as a join with other data or as a frequent object of searches. For example, suppose you have a server watchlist that contains country names and their respective two-letter country codes. You expect to use the country codes often for search or joins. So you use the country code column as the search key.
- > [!NOTE]
- >
- > File uploads are currently limited to files of up to 3.8 MB in size.
+The following example query joins the `RemoteIPCountry` column in the `Heartbeat` table with the search key defined for the watchlist named `mywatchlist`.
-1. Review the information, verify that it is correct, wait for the *Validation passed* message, and then select **Create**.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-review.png" alt-text="watchlist review page":::
-
- A notification appears once the watchlist is created.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-complete.png" alt-text="watchlist successful creation notification" lightbox="./media/watchlists/sentinel-watchlist-complete.png":::
--
-## Create a new watchlist using a template (Public preview)
-
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Configuration** > **Watchlist** > **Templates (Preview)**.
-
-1. Select a template from the list to view details on the right, and then select **Create from template** to create your watchlist.
-
- :::image type="content" source="./media/watchlists/create-watchlist-from-template.png" alt-text="Create a watchlist from a built-in template." lightbox="./media/watchlists/create-watchlist-from-template.png":::
-
-1. Continue in the **Watchlist wizard**:
-
- - When using a watchlist template, the watchlist's **Name**, **Description**, and **Watchlist Alias** values are all read-only.
-
- - Select **Download Schema** to download a CSV file that contains the relevant schema expected for the selected watchlist template.
-
- Each built-in watchlist template has it's own set of data listed in the CSV file attached to the template. For more information, see [Built-in watchlist schemas](watchlist-schemas.md)
-
-1. Populate your local version of the CSV file, and then upload it back into the wizard.
-
-1. Continue as you would when [creating a new watchlist from scratch](#review-and-create), and then use your watchlist with [queries](#use-watchlists-in-queries) and [analytics rules](#use-watchlists-in-analytics-rules).
-
-## Use watchlists in queries
-
-> [!TIP]
-> For optimal query performance, use **SearchKey** (representing the field you defined in creating the watchlist) as the key for joins in your queries. See the example below.
-
-1. From the Azure portal, navigate to **Microsoft Sentinel** > **Configuration** > **Watchlist**, select the watchlist you want to use, and then select **View in Log Analytics**.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-list.png" alt-text="use watchlists in queries" lightbox="./media/watchlists/sentinel-watchlist-queries-list.png" :::
-
-1. The items in your watchlist are automatically extracted for your query, and will appear on the **Results** tab. The example below shows the results of the extraction of the **Name** and **IP Address** fields. The **SearchKey** is shown as its own column.
-
- > [!NOTE]
- > The timestamp on your queries will be ignored in both the query UI and in scheduled alerts.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-fields.png" alt-text="queries with watchlist fields" lightbox="./media/watchlists/sentinel-watchlist-queries-fields.png":::
-
-1. You can query data in any table against data from a watchlist by treating the watchlist as a table for joins and lookups. Use **SearchKey** as the key for your join.
-
- ```kusto
- Heartbeat
+ ```kusto
+ Heartbeat
| lookup kind=leftouter _GetWatchlist('mywatchlist') on $left.RemoteIPCountry == $right.SearchKey
- ```
- :::image type="content" source="./media/watchlists/sentinel-watchlist-queries-join.png" alt-text="queries against watchlist as lookup" lightbox="./media/watchlists/sentinel-watchlist-queries-join.png":::
-
-## Use watchlists in analytics rules
+ ```
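
If you want only the events that match a watchlist item, you can also treat the watchlist as the right side of a `join`. A minimal sketch, under the same assumptions as the example above (a watchlist with the alias `mywatchlist` whose search key holds country codes):

```kusto
Heartbeat
| join kind=inner (_GetWatchlist('mywatchlist')) on $left.RemoteIPCountry == $right.SearchKey
```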
-> [!TIP]
-> For optimal query performance, use **SearchKey** (representing the field you defined in creating the watchlist) as the key for joins in your queries. See the example below.
+Let's look at some other example queries.
-To use watchlists in analytics rules, from the Azure portal, navigate to **Microsoft Sentinel** > **Configuration** > **Analytics**, and create a rule using the `_GetWatchlist('<watchlist>')` function in the query.
+Suppose you want to use a watchlist in an analytics rule. You create a watchlist called "ipwatchlist" that includes columns for "IPAddress" and "Location". You define "IPAddress" as the search key.
-1. In this example, create a watchlist called "ipwatchlist" with the following values:
+|IPAddress|Location|
+|--|--|
+|10.0.100.11|Home|
+|172.16.107.23|Work|
+|10.0.150.39|Home|
+|172.20.32.117|Work|
- :::image type="content" source="./media/watchlists/create-watchlist.png" alt-text="list of four items for watchlist":::
+To only include events from IP addresses in the watchlist, you might use a query where the watchlist is used as a variable or where the watchlist is used inline.
- :::image type="content" source="./media/watchlists/sentinel-watchlist-new-other.png" alt-text="create watchlist with four items":::
+The following example query uses the watchlist as a variable:
-1. Next, create the analytics rule. In this example, we only include events from IP addresses in the watchlist:
-
- ```kusto
+ ```kusto
    //Watchlist as a variable
    let watchlist = (_GetWatchlist('ipwatchlist') | project IPAddress);
    Heartbeat
    | where ComputerIP in (watchlist)
- ```
- ```kusto
+ ```
+
+The following example query uses the watchlist inline with the query and the search key defined for the watchlist.
+
+ ```kusto
    //Watchlist inline with the query
    //Use SearchKey for the best performance
    Heartbeat
    | where ComputerIP in (
        (_GetWatchlist('ipwatchlist')
        | project SearchKey)
    )
- ```
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-analytics-rule.png" alt-text="use watchlists in analytics rules":::
-
-## View list of watchlists aliases
+ ```
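
The same pattern also covers the allowlist scenario described earlier for reducing alert fatigue: invert the filter with `!in` to suppress events that originate from approved IP addresses. A minimal sketch, assuming the same `ipwatchlist` watchlist:

```kusto
//Exclude events coming from approved IP addresses in the watchlist
let watchlist = (_GetWatchlist('ipwatchlist') | project IPAddress);
Heartbeat
| where ComputerIP !in (watchlist)
```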
-To get a list of watchlist aliases, from the Azure portal, navigate to **Microsoft Sentinel** > **General** > **Logs**, and run the following query: `_GetWatchlistAlias`. You can see the list of aliases in the **Results** tab.
+For more information, see [Build queries and detection rules with watchlists in Microsoft Sentinel](watchlists-queries.md).
- :::image type="content" source="./media/watchlists/sentinel-watchlist-alias.png" alt-text="list watchlists" lightbox="./media/watchlists/sentinel-watchlist-alias.png":::
-
-## Manage your watchlist in the Microsoft Sentinel portal
-
-You can also view, edit, and create new watchlist items directly from the Watchlist blade in the Microsoft Sentinel portal.
-
-1. To edit your watchlist, navigate to **Microsoft Sentinel > Configuration > Watchlist**, select the watchlist you want to edit, and select **Edit watchlist items** on the details pane.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-edit.png" alt-text="Screen shot showing how to edit a watchlist" lightbox="./media/watchlists/sentinel-watchlist-edit.png":::
-
-1. To edit an existing watchlist item, mark the checkbox of that watchlist item, edit the item, and select **Save**. Select **Yes** at the confirmation prompt.
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-change.png" alt-text="Screen shot showing how to mark and edit a watchlist item.":::
-
- :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-confirm.png" alt-text="Screen shot confirm your changes.":::
-
-1. To add a new item to your watchlist, select **Add new** on the **Edit watchlist items** screen, fill in the fields in the **Add watchlist item** panel, and select **Add** at the bottom of that panel.
+## Next steps
- :::image type="content" source="./media/watchlists/sentinel-watchlist-edit-add.png" alt-text="Screen shot showing how to add a new item to your watchlist.":::
+To learn more about Microsoft Sentinel, see the following articles:
-## Next steps
-In this document, you learned how to use watchlists in Microsoft Sentinel to enrich data and improve investigations. To learn more about Microsoft Sentinel, see the following articles:
+- [Create watchlists](watchlists-create.md)
+- [Build queries and detection rules with watchlists](watchlists-queries.md)
+- [Manage watchlists](watchlists-manage.md)
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/web-normalization-schema.md
Web Session events may also include [User](network-normalization-schema.md#user)
## Parsers
-### Source-agnostic parsers
+For more information about ASIM parsers, see the [ASIM parsers overview](normalization-parsers-overview.md) and [Use ASIM parsers](normalization-about-parsers.md).
-To use the source-agnostic parsers that unify all of the out-of-the-box parsers, and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query:
-
-| Name | Description | Usage instructions |
-| - | | |
-| <a name="imwebsession"></name>**imWebSession** | Aggregative parser that uses *union* to include normalized events from all *Web Session* sources. <br><br>Example: Network sessions fields that support [HTTP session fields](#http-session-fields) |- Update this parser if you want to add or remove sources from source-agnostic analytics.<br><br>- Use this function in your source-agnostic queries.|
-| **ASimWebSession** | Similar to the [imWebSession](#imwebsession) function, but without parameter support, and therefore does not force the **Logs** page time picker to use the `custom` value. |- Update these parsers if you want to add or remove sources from source-agnostic analytics.<br><br>- Use this function in your source-agnostic queries if you don't plan to use parameters.|
-| **vimWebSession\<vendor\>\<product\>** | Source-specific parsers implement normalization for a specific source. |- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the `im` aggregative parser to include reference to your new parser. <br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use a source-specific parser for source-specific analytics.|
-| **ASimWebSession\<vendor\>\<product\>** | Source-specific parsers implement normalization for a specific source. <br><br>Unlike the `vim*` functions, the `ASim*` functions do not support parameters. |- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the aggregative `ASim` parser to include reference to your new parser.<br><br>- Update a source-specific parser to resolve parsing and normalization issues.<br><br>- Use an `ASim` source-specific parser for interactive queries when not using parameters.|
-| | | |
+### Unifying parsers
+To use the unifying parsers, which combine all of the out-of-the-box parsers and ensure that your analysis runs across all the configured sources, use the following KQL functions as the table name in your query.
Deploy ASIM parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
+#### <a name="imwebsession"></name>imWebSession
+
+Aggregative parser that uses *union* to include normalized events from all *Web Session* sources.
+
+Example: Network sessions fields that support [HTTP session fields](#http-session-fields)
+
+- Update this parser if you want to add or remove sources from source-agnostic analytics.
+- Use this function in your source-agnostic queries, as shown in the sketch after this list.
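
A minimal sketch of such a source-agnostic query, assuming your sources populate the normalized `SrcIpAddr` field:

```kusto
// Count normalized web session events per source IP address, across all configured sources
imWebSession
| summarize EventCount = count() by SrcIpAddr
| top 10 by EventCount
```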
+
+#### ASimWebSession
+
+Similar to the [imWebSession](#imwebsession) function, but without parameter support, and therefore does not force the **Logs** page time picker to use the `custom` value.
+
+- Update these parsers if you want to add or remove sources from source-agnostic analytics.
+- Use this function in your source-agnostic queries if you don't plan to use parameters.
+
+#### vimWebSession\<vendor\>\<product\>
+
+Source-specific parsers implement normalization for a specific source.
+
+- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the `im` aggregative parser to include reference to your new parser.
+- Update a source-specific parser to resolve parsing and normalization issues.
+- Use a source-specific parser for source-specific analytics.
+
+#### ASimWebSession\<vendor\>\<product\>
+
+Source-specific parsers implement normalization for a specific source.
+
+Unlike the `vim*` functions, the `ASim*` functions do not support parameters.
+
+- Add a source-specific parser for a source when there is no out-of-the-box normalizing parser. Update the aggregative `ASim` parser to include reference to your new parser.
+- Update a source-specific parser to resolve parsing and normalization issues.
+- Use an `ASim` source-specific parser for interactive queries when not using parameters.
### Add your own normalized parsers

When implementing custom parsers for the Web Session information model, name your KQL functions using the following syntax:
If the event is reported by one of the endpoints of the web session, it may incl
For more information, see: -- [Normalization in Microsoft Sentinel](normalization.md)-- [Advanced SIEM Information Model schemas](normalization-about-schemas.md)-- [Advanced SIEM Information Model parsers](normalization-about-parsers.md)-- [Advanced SIEM Information Model content](normalization-content.md)
+- Watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG)
+- [Advanced SIEM Information Model (ASIM) overview](normalization.md)
+- [Advanced SIEM Information Model (ASIM) schemas](normalization-about-schemas.md)
+- [Advanced SIEM Information Model (ASIM) parsers](normalization-parsers-overview.md)
+- [Advanced SIEM Information Model (ASIM) content](normalization-content.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
-## January 2022
+## January 2022
+- [More workspaces supported for Multiple Workspace View](#more-workspaces-supported-for-multiple-workspace-view)
+- [Kusto Query Language workbook and tutorial](#kusto-query-language-workbook-and-tutorial)
+### More workspaces supported for Multiple Workspace View
+
+Now, instead of being limited to 10 workspaces in Microsoft Sentinel's [Multiple Workspace View](multiple-workspace-view.md), you can view data from up to 30 workspaces simultaneously.
+
+While we often recommend a single-workspace environment, some use cases require multiple workspaces, such as for Managed Security Service Providers (MSSPs) and their customers. **Multiple Workspace View** lets you see and work with security incidents across several workspaces at the same time, even across tenants, allowing you to maintain full visibility and control of your organization's security responsiveness.
+
+For more information, see:
+
+- [The need to use multiple Microsoft Sentinel workspaces](extend-sentinel-across-workspaces-tenants.md#the-need-to-use-multiple-microsoft-sentinel-workspaces)
+- [Work with incidents in many workspaces at once](multiple-workspace-view.md)
+- [Manage multiple tenants in Microsoft Sentinel as an MSSP](multiple-tenants-service-providers.md)
### Kusto Query Language workbook and tutorial

Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visualize data, as the basis for detection rules, workbooks, hunting, and more.
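
As a flavor of the queries the workbook and tutorial cover, here's a minimal KQL sketch; the `SecurityEvent` table is an assumption and depends on the connectors enabled in your workspace:

```kusto
// Top five accounts by event volume over the last day
SecurityEvent
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by Account
| top 5 by EventCount
```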
Watchlist templates currently include:
- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
-For more information, see [Create a new watchlist using a template](watchlists.md#create-a-new-watchlist-using-a-template-public-preview) and [Built-in watchlist schemas](watchlist-schemas.md).
+For more information, see [Create a new watchlist using a template](watchlists-create.md#create-a-watchlist-by-using-a-template-public-preview) and [Built-in watchlist schemas](watchlist-schemas.md).
sentinel Work With Anomaly Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/work-with-anomaly-rules.md
> > - Anomaly rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## View SOC-ML anomaly rule templates
+## View customizable anomaly rule templates
-Microsoft Sentinel's [SOC-ML anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules must be activated before they will generate anomalies, which you can find in the **Anomalies** table in the **Logs** section.
+Microsoft Sentinel's [customizable anomalies feature](soc-ml-anomalies.md) provides [built-in anomaly templates](detect-threats-built-in.md#anomaly) for immediate value out-of-the-box. These anomaly templates were developed to be robust by using thousands of data sources and millions of events, but this feature also enables you to change thresholds and parameters for the anomalies easily within the user interface. Anomaly rules must be activated before they will generate anomalies, which you can find in the **Anomalies** table in the **Logs** section.
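
Once rules are active, a minimal sketch such as the following reviews recent rows in that table from the **Logs** page:

```kusto
// Review anomalies generated over the last seven days
Anomalies
| where TimeGenerated > ago(7d)
| take 10
```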
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
Microsoft Sentinel's [SOC-ML anomalies feature](soc-ml-anomalies.md) provides
## Activate anomaly rules
-When you click on one of the rule templates, you will see the following information in the details pane, along with a **Create rule** button:
+When you select one of the rule templates, you will see the following information in the details pane, along with a **Create rule** button:
- **Description** explains how the anomaly works and the data it requires.
This is by design, to give you the opportunity to compare the results generated
## Next steps
-In this document, you learned how to work with SOC-ML anomaly detection analytics rules in Microsoft Sentinel.
+In this document, you learned how to work with customizable anomaly detection analytics rules in Microsoft Sentinel.
-- Get some background information about [SOC-ML](soc-ml-anomalies.md).
+- Get some background information about [customizable anomalies](soc-ml-anomalies.md).
- Explore other [analytics rule types](detect-threats-built-in.md).
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/concept-region-support.md
If your compute service instance is located in one of the regions that Service C
- North Europe - East US - West US 2-- Australia East-- UK South-- Japan East-- Southeast Asia ## Supported regions with geographical endpoint
Your compute service instance might be created in the region that Service Connec
- East US 2 - West US 3 - South Central US-- Australia Central-- Australia Southeast-- UK West-- Japan West-- West US-- North Central US ## Not supported regions in public preview
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-key-vault.md
# Integrate Azure Key Vault with Service Connector
+> [!NOTE]
+> When you use Service Connector to connect your key vault or manage key vault connections, Service Connector uses your token to perform the corresponding operations.
This page shows the supported authentication types and client types of Azure Key Vault using Service Connector. You might still be able to connect to Azure Key Vault in other programming languages without using Service Connector. This page also shows default environment variable name and value (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).

## Supported compute service
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| 16.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure| |||
-18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** |
+18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1126-azure </br> 4.15.0-1125-azure </br> 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-162-generic </br> 4.15.0-161-generic </br> 4.15.0-156-generic </br> 5.4.0-1061-azure to 5.4.0-1063-azure </br> 5.4.0-90-generic </br> 5.4.0-89-generic </br> 9.46 hotfix patch** </br> 4.15.0-1127-azure </br> 4.15.0-163-generic </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | 18.04 LTS | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 4.15.0-153-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic | 18.04 LTS |[9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic | |||
-20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** |
+20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-84-generic </br> 5.4.0-1058-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-1063-azure </br> 5.4.0-89-generic </br> 5.4.0-90-generic </br> 9.46 hotfix patch** </br> 5.4.0-1064-azure </br> 5.4.0-91-generic |
20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 20.04 LTS |[9.44](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 5.4.0-81-generic </br> 5.4.0-1056-azure | 20.04 LTS |[9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic |
Debian 10 | [9.41](https://support.microsoft.com/topic/update-rollup-54-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.46](https://support.microsoft.com/en-us/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.44](https://support.microsoft.com/en-us/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure </br> 4.12.14-16.68-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.43](https://support.microsoft.com/en-us/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure </br> 4.12.14-16.65-azure | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.56-azure |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.44-azure </br> 4.12.14-16.47-azure through 9.41 hot fix patch**|
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.38-azure </br> 4.12.14-16.41-azure through 9.40 hot fix patch**|
#### Supported SUSE Linux Enterprise Server 15 kernel versions for Azure virtual machines **Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure </br> 5.3.18-18.72-azure </br> 5.3.18-18.75-azure
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure SUSE Linux Enterprise Server 15, SP1, SP2 | [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/authentication-authorization.md
Azure Static Web Apps provides a streamlined authentication experience. By defau
The subjects of authentication and authorization significantly overlap with routing concepts, which are detailed in the [application configuration guide](configuration.md#routes).
+## System folder
+
+Azure Static Web Apps uses the `/.auth` system folder to provide access to authorization-related APIs. Rather than exposing any of the routes under the `/.auth` folder directly to end users, consider creating [routing rules](configuration.md#routes) to create friendly URLs.
+
+## Login
+
+Use the following table to find the provider-specific route.
+
+| Authorization provider | Login route |
+| - | -- |
+| Azure Active Directory | `/.auth/login/aad` |
+| GitHub | `/.auth/login/github` |
+| Twitter | `/.auth/login/twitter` |
+
+For example, to log in with GitHub you could include a link like the following snippet:
+
+```html
+<a href="/.auth/login/github">Login</a>
+```
+
+If you choose to support more than one provider, then you need to expose a provider-specific link for each on your website.
+
+You can use a [route rule](./configuration.md#routes) to map a default provider to a friendly route like _/login_.
+
+```json
+{
+ "route": "/login",
+ "redirect": "/.auth/login/github"
+}
+```
+
+### Post login redirect
+
+If you want a user to return to a specific page after login, provide a fully qualified URL in the `post_login_redirect_uri` query string parameter.
+
+For example:
+
+```html
+<a href="/.auth/login/github?post_login_redirect_uri=https://zealous-water.azurestaticapps.net/success">Login</a>
+```
+
+## Logout
+
+The `/.auth/logout` route logs users out from the website. You can add a link to your site navigation to allow the user to log out as shown in the following example.
+
+```html
+<a href="/.auth/logout">Log out</a>
+```
+
+You can use a [route rule](./configuration.md#routes) to map a friendly route like _/logout_.
+
+```json
+{
+ "route": "/logout",
+ "redirect": "/.auth/logout"
+}
+```
+
+### Post logout redirect
+
+If you want a user to return to a specific page after logout, provide a URL in the `post_logout_redirect_uri` query string parameter.
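
For example, mirroring the login snippet above (the domain and target page are hypothetical):

```html
<a href="/.auth/logout?post_logout_redirect_uri=https://zealous-water.azurestaticapps.net/goodbye">Log out</a>
```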
+
+## Block an authorization provider
+
+You may want to restrict your app from using an authorization provider. For instance, your app may want to standardize only on [providers that expose email addresses](#provider-user-details).
+
+To block a provider, you can create [route rules](configuration.md#routes) to return a 404 for requests to the blocked provider-specific route. For example, to restrict Twitter as provider, add the following route rule.
+
+```json
+{
+ "route": "/.auth/login/twitter",
+ "statusCode": 404
+}
+```
## Roles

Every user who accesses a static web app belongs to one or more roles. There are two built-in roles that users can belong to:
https://<WEB_APP_DOMAIN_NAME>/.auth/purge/<AUTHENTICATION_PROVIDER_NAME>
Note that if you are using Azure Active Directory, use `aad` as the value for the `<AUTHENTICATION_PROVIDER_NAME>` placeholder.
-## System folder
-
-Azure Static Web Apps uses the `/.auth` system folder to provide access to authorization-related APIs. Rather than exposing any of the routes under the `/.auth` folder directly to end users, consider creating [routing rules](configuration.md#routes) to create friendly URLs.
-
-## Login
-
-Use the following table to find the provider-specific route.
-
-| Authorization provider | Login route |
-| - | -- |
-| Azure Active Directory | `/.auth/login/aad` |
-| GitHub | `/.auth/login/github` |
-| Twitter | `/.auth/login/twitter` |
-
-For example, to log in with GitHub you could include a link like the following snippet:
-
-```html
-<a href="/.auth/login/github">Login</a>
-```
-
-If you chose to support more than one provider, then you need to expose a provider-specific link for each on your website.
-
-You can use a [route rule](./configuration.md#routes) to map a default provider to a friendly route like _/login_.
-
-```json
-{
- "route": "/login",
- "redirect": "/.auth/login/github"
-}
-```
-
-### Post login redirect
-
-If you want a user to return to a specific page after login, provide a full qualified URL in `post_login_redirect_uri` query string parameter.
-
-For example:
-
-```html
-<a href="/.auth/login/github?post_login_redirect_uri=https://zealous-water.azurestaticapps.net/success">Login</a>
-```
-
-## Logout
-
-The `/.auth/logout` route logs users out from the website. You can add a link to your site navigation to allow the user to log out as shown in the following example.
-
-```html
-<a href="/.auth/logout">Log out</a>
-```
-
-You can use a [route rule](./configuration.md#routes) to map a friendly route like _/logout_.
-
-```json
-{
- "route": "/logout",
- "redirect": "/.auth/logout"
-}
-```
-
-### Post logout redirect
-
-If you want a user to return to a specific page after logout, provide a URL in `post_logout_redirect_uri` query string parameter.
-
-## Block an authorization provider
-
-You may want to restrict your app from using an authorization provider. For instance, your app may want to standardize only on [providers that expose email addresses](#provider-user-details).
-
-To block a provider, you can create [route rules](configuration.md#routes) to return a 404 for requests to the blocked provider-specific route. For example, to restrict Twitter as provider, add the following route rule.
-
-```json
-{
- "route": "/.auth/login/twitter",
- "statusCode": 404
-}
-```
- ## Restrictions See the [Quotas article](quotas.md) for general restrictions and limitations.
static-web-apps Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/getting-started.md
If you don't have an Azure subscription, [create a free trial account](https://a
Next, open Visual Studio Code and go to **File > Open Folder** to open the cloned repository in the editor.
+## Install Azure Static Web Apps extension
+
+If you don't already have the [Azure Static Web Apps extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps), you can install it in Visual Studio Code.
+
+1. Select **View** > **Extensions**.
+1. In the **Search Extensions in Marketplace** box, type **Azure Static Web Apps**.
+1. Select **Install** for **Azure Static Web Apps**.
+1. The extension will install into Visual Studio Code.
## Create a static web app

1. Inside Visual Studio Code, select the Azure logo in the Activity Bar to open the Azure extensions window.
synapse-analytics Synapse Workspace Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security/synapse-workspace-ip-firewall.md
You can also add IP firewall rules to a Synapse workspace after the workspace is
You can connect to your Synapse workspace using Synapse Studio. You can also use SQL Server Management Studio (SSMS) to connect to the SQL resources (dedicated SQL pools and serverless SQL pool) in your workspace.
-Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1433 for Synapse Studio.
+Make sure that the firewall on your network and local computer allows outgoing communication on TCP ports 80, 443 and 1443 for Synapse Studio.
Also, you need to allow outgoing communication on UDP port 53 for Synapse Studio. To connect using tools such as SSMS and Power BI, you must allow outgoing communication on TCP port 1433.
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Previously updated : 04/15/2020 Last updated : 01/01/2022
Query languages used in Synapse SQL can have different supported features depend
| **Cross-database queries** | No | Yes, including [USE](/sql/t-sql/language-elements/use-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. | | **Built-in/system functions (analysis)** | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions, except [CHOOSE](/sql/t-sql/functions/logical-functions-choose-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [PARSE](/sql/t-sql/functions/parse-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all Transact-SQL [Analytic](/sql/t-sql/functions/analytic-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Conversion, [Date and Time](/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), Logical, [Mathematical](/sql/t-sql/functions/mathematical-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true) functions. | | **Built-in/system functions ([string](/sql/t-sql/functions/string-functions-transact-sql))** | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions, except [STRING_ESCAPE](/sql/t-sql/functions/string-escape-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [TRANSLATE](/sql/t-sql/functions/translate-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes. All Transact-SQL [String](/sql/t-sql/functions/string-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), [JSON](/sql/t-sql/functions/json-functions-transact-sql?view=azure-sqldw-latest&preserve-view=true), and Collation functions. |
-| **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | No |
+| **Built-in/system functions ([Cryptographic](/sql/t-sql/functions/cryptographic-functions-transact-sql))** | Some | `HASHBYTES` is the only supported cryptographic function in serverless SQL pools (see the sketch after this table). |
| **Built-in/system table-value functions** | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, [Transact-SQL Rowset functions](/sql/t-sql/functions/functions?view=azure-sqldw-latest&preserve-view=true#rowset-functions), except [OPENXML](/sql/t-sql/functions/openxml-transact-sql?view=azure-sqldw-latest&preserve-view=true), [OPENDATASOURCE](/sql/t-sql/functions/opendatasource-transact-sql?view=azure-sqldw-latest&preserve-view=true), and [OPENQUERY](/sql/t-sql/functions/openquery-transact-sql?view=azure-sqldw-latest&preserve-view=true) | | **Built-in/system aggregates** | Transact-SQL built-in aggregates, except [CHECKSUM_AGG](/sql/t-sql/functions/checksum-agg-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [GROUPING_ID](/sql/t-sql/functions/grouping-id-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Transact-SQL built-in aggregates. | | **Operators** | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) except [!>](/sql/t-sql/language-elements/not-greater-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) and [!<](/sql/t-sql/language-elements/not-less-than-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, all [Transact-SQL operators](/sql/t-sql/language-elements/operators-transact-sql?view=azure-sqldw-latest&preserve-view=true) |
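
A minimal sketch of the cryptographic support noted in the table above, runnable in a serverless SQL pool; the algorithm name and input literal are illustrative:

```sql
-- Compute a SHA2_256 hash with HASHBYTES, the one supported cryptographic function
SELECT HASHBYTES('SHA2_256', 'Azure Synapse') AS hashed_value;
```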
Synapse SQL pools enable you to use built-in security features to secure your da
| | Dedicated | Serverless | | | | |
-| **Logins** | N/A (only contained users are supported in databases) | Yes |
+| **Logins** | N/A (only contained users are supported in databases) | Yes, server-level Azure AD and SQL logins are supported (see the sketch after this table). |
| **Users** | N/A (only contained users are supported in databases) | Yes | | **[Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable?view=azure-sqldw-latest&preserve-view=true)** | Yes. **Note:** only one Azure AD user can be unrestricted admin | No | | **SQL username/password authentication**| Yes | Yes |
-| **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users |
+| **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users can access serverless SQL pools by using their Azure AD identities. |
| **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes | | **Storage SAS token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). | | **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
Synapse SQL pools enable you to use built-in security features to secure your da
| **Server-level roles** | No | Yes, sysadmin, public, and other server-roles are supported | | **SERVER SCOPED CREDENTIAL** | No | Yes, the server scoped credentials are used by the `OPENROWSET` function when it doesn't use an explicit data source. | | **Permissions - [Server-level](/sql/relational-databases/security/authentication-access/server-level-roles)** | No | Yes, for example, `CONNECT ANY DATABASE` and `SELECT ALL USER SECURABLES` enable a user to read data from any databases. |
-| **Database-scoped roles** | Yes | Yes |
+| **Database-scoped roles** | Yes | Yes, you can use `db_owner`, `db_datareader` and `db_ddladmin` roles. |
| **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, used in external data sources. | | **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes | | **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema |
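
A minimal sketch of the server-level logins and database-scoped roles noted in the table above for serverless SQL pools; the account names and password are hypothetical:

```sql
-- Create a server-level Azure AD login (hypothetical account)
CREATE LOGIN [user@contoso.com] FROM EXTERNAL PROVIDER;

-- Create a server-level SQL login (hypothetical name; replace the placeholder password)
CREATE LOGIN demo_login WITH PASSWORD = '<strong-password>';

-- Grant read access in a database through a database-scoped role
-- (assumes a database user named demo_user already exists)
ALTER ROLE db_datareader ADD MEMBER demo_user;
```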
virtual-desktop Compare Virtual Desktop Windows 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/compare-virtual-desktop-windows-365.md
+
+ Title: Comparing Azure Virtual Desktop and Windows 365 - Azure
+description: Comparing technical features between Azure Virtual Desktop and Windows 365.
++ Last updated : 01/10/2022++++
+# Comparing Azure Virtual Desktop and Windows 365
+
+Azure Virtual Desktop and Windows 365 are both great solutions for customers who want to have a seamless Windows experience while accessing their virtual desktop and apps remotely. In this article, we'll compare technical features between the two services.
+
+## Technical features
+
+The following table describes high-level differences in the technical features between Azure Virtual Desktop and Windows 365.
+
+| Feature | Azure Virtual Desktop (single-session)| Azure Virtual Desktop (multisession)| Windows 365 Enterprise | Windows 365 Business |
+|-|--|--|--|--|
+|Design|Designed to be flexible.|Designed to be flexible.|Designed to be simple and easy to use.|Designed to be simple and easy to use.|
+|Type of desktop|Personal desktop|Pooled (single and multi-session) desktop|Personal desktop|Personal desktop|
+|Pricing model|Based on your own resource usage|Based on your own resource usage|Fixed per-user pricing ([Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/))|Fixed per-user pricing ([Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/))|
+|Subscription|Customer-managed|Customer-managed|Microsoft-managed (except networking)|Fully Microsoft-managed|
+|VM stock-keeping units (SKUs)|Any Azure virtual machine (VM) including graphics processing unit (GPU)-enabled SKUs|Any Azure VM including GPU-enabled SKUs|Multiple optimized options for a range of use cases|Multiple optimized options for a range of use cases|
+|Backup|Azure backup services|Azure backup services|Local redundant storage for disaster recovery|Local redundant storage for disaster recovery|
+|Networking|Customer-managed|Microsoft-managed|Customer-managed|Microsoft-managed|
+|Identity|Domain join with Active Directory Domain Services (AD DS) or Azure AD DS, Hybrid Azure AD join, or Azure AD join |Domain join with AD DS or Azure AD DS, Hybrid Azure AD join, or Azure AD join |Hybrid Join, Azure AD join |Azure AD join (can't use AD DS)|
+|User profiles|Azure Files, Azure NetApp Files, or VM-based storage for FSLogix for pooled host pools, and an option for local profiles for personal desktops|Azure Files, Azure NetApp Files, VM-based storage for FSLogix for pooled host pools, and an option for local profiles for personal desktops|Local profiles, offered as software-as-a-service (SaaS)|Local profiles (offered as SaaS)|
+|Operating systems|Windows 10 Enterprise and Windows 11 Enterprise (single session and multi-session) <br>Windows Server 2012 R2, 2016, 2019 (single session and multi-session)<br>Windows 7 Enterprise (single session)|Windows 10 Enterprise and Windows 11 Enterprise (single session and multi-session) <br>Windows Server 2012 R2, 2016, 2019 (single session and multi-session)<br>Windows 7 Enterprise (single session)|Windows 10 Enterprise and Windows 11 Enterprise (single session)|Windows 10 Enterprise and Windows 11 Enterprise (single session)|
+|Base image|Custom and Microsoft-provided|Custom and Microsoft-provided|Custom and Microsoft-provided|Microsoft-provided only|
+|VM location|[Any Azure region](data-locations.md)|[Any Azure region](data-locations.md)|[Most geographies](/windows-365/enterprise/requirements#supported-azure-regions-for-cloud-pc-provisioning)|[Most geographies](/windows-365/enterprise/requirements#supported-azure-regions-for-cloud-pc-provisioning)|
+|Remote app streaming|Supported|Supported|Not supported|Not supported|
+
+## Deployment and management
+
+The following table describes differences when deploying and managing Azure Virtual Desktop and Windows 365.
+
+| Feature | Azure Virtual Desktop (single-session)| Azure Virtual Desktop (multisession)| Windows 365 Enterprise | Windows 365 Business |
+|-|--|--|--||
+|Hybrid (on-premises) or multi-cloud support|Supported with Azure Stack HCI (public preview), Citrix, and VMware|Supported with Azure Stack HCI (public preview), Citrix, and VMware|Unavailable|Unavailable|
+|On-premises connection|Supported by ExpressRoute, VPN, Azure Gateway, and SD-WAN|Supported by ExpressRoute, VPN, Azure Gateway, and SD-WAN|Supported by ExpressRoute, VPN, Azure Gateway, and SD-WAN|Supported by ExpressRoute, VPN, Azure Gateway, and SD-WAN|
+|Management portal|Azure portal (deploy and manage), Microsoft Endpoint Manager (manage only)|Azure portal (deploy and manage), Microsoft Endpoint Manager (manage only)|Microsoft Endpoint Manager|End-user portal|
+|Image management|Custom images and Microsoft-managed image management|Custom images and Microsoft-managed image management|Custom images and Microsoft-managed image management| Microsoft-managed image management only|
+|Screen capture protection|Yes (feature currently in preview)|Yes (feature currently in preview)|Yes (feature currently in preview)|Yes (feature currently in preview)|
+|Updating and patching process|Similar to physical PC|Similar to physical PC|Similar to physical PC|Similar to physical PC|
+|Autoscaling|N/A|Supported with the Autoscaling tool (preview)|N/A|N/A|
+|Application delivery|Microsoft Endpoint Manager, MSIX app attach, custom images, or Microsoft-approved partner solutions|Microsoft Endpoint Manager, MSIX app attach, custom images, or Microsoft-approved partner solutions|Same as physical PC|Same as physical PC|
+|Monitoring|Azure Virtual Desktop Insights, powered by Azure Monitor|Azure Virtual Desktop Insights, powered by Azure Monitor|Similar to physical PC|Similar to physical PC|
+|Environment validation|[Required URL check tool](safe-url-list.md)|[Required URL check tool](safe-url-list.md)|Offered as SaaS|Offered as SaaS|
|App lifecycle management|MEM, SCCM, MSI, EXE, MSIX, App-V, and others with MSIX app attach or partner solutions|MEM, SCCM, MSI, EXE, MSIX, App-V, and others with MSIX app attach or partner solutions|Same as physical PC (MEM, SCCM, MSI, EXE, MSIX, App-V, and so on)|Same as physical PC (MEM, SCCM, MSI, EXE, MSIX, App-V, and so on)|
+
+## User experience
+
+The following table compares user experience when using Azure Virtual Desktop and Windows 365.
+
+| Feature | Azure Virtual Desktop (single-session)| Azure Virtual Desktop (multisession)| Windows 365 Enterprise | Windows 365 Business |
+|-|--|--|--||
+|Client|Windows, Mac, iOS, Android, HTML, Linux SDK|Windows, Mac, iOS, Android, HTML, Linux SDK|Windows, Mac, iOS, Android, HTML, Linux SDK|Windows, Mac, iOS, Android, HTML, Linux SDK|
|Printing|Universal Print and print redirection support, network printers|Universal Print and print redirection support, network printers|Universal Print and print redirection support|Universal Print and print redirection support|
+|Protocol|Remote Desktop Protocol (RDP)|RDP|RDP|RDP|
+|End-user portal capabilities|IT uses the Azure portal to manage deployments|IT uses the Azure portal to manage deployments|User sign in, start VM, troubleshooting, restart, rename and profile reset, VM and disk resizing, OS choice|User sign in, start VM, troubleshooting, restart, rename and profile reset, VM and disk resizing, OS choice|
+
+## Licensing and pricing
+
+The following table describes the difference in licensing and pricing costs for both Azure Virtual Desktop and Windows 365.
+
+| Feature | Azure Virtual Desktop (single-session)| Azure Virtual Desktop (multisession)| Windows 365 Enterprise | Windows 365 Business |
+|-|--|--|--||
+|License costs|Use existing internal license (internal users only) or use monthly per-user access pricing (for commercial remote app streaming to external users only)|Use existing internal license (internal users only) or use monthly per-user access pricing (for commercial remote app streaming to external users only)|Monthly per-user pricing|Monthly per-user pricing|
+|Infrastructure costs|Based on consumption|Based on consumption|Included except for egress charges over base quota|Included|
+|Microsoft Endpoint Manager|Optional|Optional|Required|Optional|
+
+## Next steps
+
+- To learn more about Azure Virtual Desktop pricing, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- To learn more about Windows 365 pricing, see [Windows 365 plans and pricing](https://www.microsoft.com/windows-365/all-pricing).
virtual-desktop Security Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/security-guide.md
description: Best practices for keeping your Azure Virtual Desktop environment secure. Previously updated : 12/15/2020 Last updated : 01/11/2022
We recommend enabling Microsoft Defender for Cloud's enhanced security features
- Assess compliance with common frameworks like PCI.
- Strengthen the overall security of your environment.
-To learn more, see [Enable enhanced security features](../security-center/enable-enhanced-security.md).
+To learn more, see [Enable enhanced security features](../defender-for-cloud/enable-enhanced-security.md).
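As an illustrative sketch only, the enhanced security (Microsoft Defender) plans can also be switched on per resource type with the Azure CLI; `VirtualMachines` is the plan name the CLI uses for servers, and the commands assume sufficient rights on the subscription:

```azurecli
# Enable the enhanced security plan for servers on the current subscription.
az security pricing create --name VirtualMachines --tier 'Standard'

# Confirm the plan is now on the Standard (enhanced security) tier.
az security pricing show --name VirtualMachines --query pricingTier
```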
### Improve your Secure Score
-Secure Score provides recommendations and best practice advice for improving your overall security. These recommendations are prioritized to help you pick which ones are most important, and the Quick Fix options help you address potential vulnerabilities quickly. These recommendations also update over time, keeping you up to date on the best ways to maintain your environment’s security. To learn more, see [Improve your Secure Score in Microsoft Defender for Cloud](../security-center/secure-score-security-controls.md).
+Secure Score provides recommendations and best practice advice for improving your overall security. These recommendations are prioritized to help you pick which ones are most important, and the Quick Fix options help you address potential vulnerabilities quickly. These recommendations also update over time, keeping you up to date on the best ways to maintain your environment’s security. To learn more, see [Improve your Secure Score in Microsoft Defender for Cloud](../defender-for-cloud/secure-score-security-controls.md).
## Azure Virtual Desktop security best practices
For profile solutions like FSLogix or other solutions that mount VHD files, we r
### Install an endpoint detection and response product
-We recommend you install an endpoint detection and response (EDR) product to provide advanced detection and response capabilities. For server operating systems with [Microsoft Defender for Cloud](../security-center/security-center-services.md) enabled, installing an EDR product will deploy Defender ATP. For client operating systems, you can deploy [Defender ATP](/windows/security/threat-protection/microsoft-defender-atp/onboarding) or a third-party product to those endpoints.
+We recommend you install an endpoint detection and response (EDR) product to provide advanced detection and response capabilities. For server operating systems with [Microsoft Defender for Cloud](../defender-for-cloud/integration-defender-for-endpoint.md) enabled, installing an EDR product will deploy Microsoft Defender for Endpoint. For client operating systems, you can deploy [Microsoft Defender for Endpoint](/windows/security/threat-protection/microsoft-defender-atp/onboarding) or a third-party product to those endpoints.
### Enable threat and vulnerability management assessments
-Identifying software vulnerabilities that exist in operating systems and applications is critical to keeping your environment secure. Microsoft Defender for Cloud can help you identify problem spots through vulnerability assessments for server operating systems. You can also use Defender ATP, which provides threat and vulnerability management for desktop operating systems. You can also use third-party products if you're so inclined, although we recommend using Microsoft Defender for Cloud and Defender ATP.
+Identifying software vulnerabilities that exist in operating systems and applications is critical to keeping your environment secure. Microsoft Defender for Cloud can help you identify problem spots through [Microsoft Defender for Endpoint's threat and vulnerability management solution](../defender-for-cloud/deploy-vulnerability-assessment-tvm.md). You can also use third-party products if you're so inclined, although we recommend using Microsoft Defender for Cloud and Microsoft Defender for Endpoint.
### Patch software vulnerabilities in your environment
By restricting operating system capabilities, you can strengthen the security of
- Restrict Windows Explorer access by hiding local and remote drive mappings. This prevents users from discovering unwanted information about system configuration and users.

-- Avoid direct RDP access to session hosts in your environment. If you need direct RDP access for administration or troubleshooting, enable [just-in-time](../security-center/security-center-just-in-time.md) access to limit the potential attack surface on a session host.
+- Avoid direct RDP access to session hosts in your environment. If you need direct RDP access for administration or troubleshooting, enable [just-in-time](../defender-for-cloud/just-in-time-access-usage.md) access to limit the potential attack surface on a session host.
- Grant users limited permissions when they access local and remote file systems. You can restrict permissions by making sure your local and remote file systems use access control lists with least privilege. This way, users can only access what they need and can't change or delete critical resources.
virtual-machines Av2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/av2-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Av2-series VMs can be deployed on a variety of hardware types and processors. Av2-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Some example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
+The Av2-series VMs can be deployed on a variety of hardware types and processors. Av2-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake), the Intel® Xeon® 8171M 2.1 GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. Av2-series VMs have CPU performance and memory configurations best suited for entry level workloads like development and test. The size is throttled to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine. Some example use cases include development and test servers, low traffic web servers, small to medium databases, proof-of-concepts, and code repositories.
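As a quick sketch of that in-guest query on a Linux Av2 VM (a Windows guest could use `Get-WmiObject Win32_Processor` instead):

```bash
# From inside the VM: show which processor the instance landed on
# (Haswell, Broadwell, Skylake, Cascade Lake, or Ice Lake).
grep -m1 "model name" /proc/cpuinfo
```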
[ACU](acu.md): 100<br> [Premium Storage](premium-storage-performance.md): Not Supported <br>
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ddv4-ddsv4-series.md
Last updated 06/01/2020
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Ddv4 and Ddsv4-series runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Dv3/Dsv3](./dv3-dsv3-series.md) sizes with [Gen2 VMs](./generation-2.md).
+The Ddv4 and Ddsv4-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Dv3/Dsv3](./dv3-dsv3-series.md) sizes with [Gen2 VMs](./generation-2.md).
D-series use cases include enterprise-grade applications, relational databases, in-memory caching, and analytics. ## Ddv4-series
-Ddv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Ddv4-series offer a combination of vCPU, memory and temporary disk for most production workloads.
+Ddv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Ddv4-series offer a combination of vCPU, memory and temporary disk for most production workloads.
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) and are designed for applications that benefit from low latency, high-speed local storage, such as applications that require fast reads/ writes to temp storage or that need temp storage for caches or temporary files. You can attach Standard SSDs and Standard HDDs storage to the Ddv4 VMs. Remote Data disk storage is billed separately from virtual machines.
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) a
## Ddsv4-series
-Ddsv4-series run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Ddsv4-series offer a combination of vCPU, memory and temporary disk for most production workloads.
+Ddsv4-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Ddsv4-series offer a combination of vCPU, memory and temporary disk for most production workloads.
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) and are designed for applications that benefit from low latency, high-speed local storage, such as applications that require fast reads/ writes to temp storage or that need temp storage for caches or temporary files.
virtual-machines Dedicated Host Compute Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-compute-optimized-skus.md
The following packing configuration outlines the max packing of uniform VMs you
### Fsv2-Type3
-The Fsv2-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 86 vCPUs, and 504 GiB of RAM. The Fsv2-Type3 runs [Fsv2-series](fsv2-series.md) VMs.
+The Fsv2-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 84 vCPUs, and 504 GiB of RAM. The Fsv2-Type3 runs [Fsv2-series](fsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Fsv2-Type3 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 52 | 86 | 504 GiB | F2s v2 | 32 |
+| 52 | 84 | 504 GiB | F2s v2 | 32 |
| | | | F4s v2 | 21 |
| | | | F8s v2 | 10 |
| | | | F16s v2 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
### Fsv2-Type4
-The Fsv2-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Fsv2-Type4 runs [Fsv2-series](fsv2-series.md) VMs.
+The Fsv2-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Fsv2-Type4 runs [Fsv2-series](fsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Fsv2-Type4 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | F2s v2 | 32 |
-| | | | F4s v2 | 25 |
+| 64 | 96 | 768 GiB | F2s v2 | 32 |
+| | | | F4s v2 | 24 |
| | | | F8s v2 | 12 |
| | | | F16s v2 | 6 |
| | | | F32s v2 | 3 |
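Reserving one of these hosts follows the usual two-step host group/host flow. A minimal Azure CLI sketch, with hypothetical resource, group, and host names:

```azurecli
# Create a dedicated host group in one availability zone.
az vm host group create --resource-group myRG --name myHostGroup \
    --location eastus --zone 1 --platform-fault-domain-count 1

# Reserve an Fsv2-Type4 host inside that group.
az vm host create --resource-group myRG --host-group myHostGroup \
    --name myFsv2Host --sku Fsv2-Type4 --platform-fault-domain 0
```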
virtual-machines Dedicated Host General Purpose Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-general-purpose-skus.md
You can also mix multiple VM sizes on the Dasv4-Type1. The following are sample
- 20 D4asv4 + 8 D2asv4

### Dasv4-Type2
-The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
+The Dasv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 110 vCPUs, and 768 GiB of RAM. The Dasv4-Type2 runs [Dasv4-series](dav4-dasv4-series.md#dasv4-series) VMs. Please refer to the VM size documentation to better understand specific VM performance information.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dasv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 112 | 768 GiB | D2as v4 | 32 |
+| 64 | 110 | 768 GiB | D2as v4 | 32 |
| | | | D4as v4 | 25 |
| | | | D8as v4 | 12 |
| | | | D16as v4 | 6 |
The following packing configuration outlines the max packing of uniform VMs you
| | | | D4ds v4 | 17 |
| | | | D8ds v4 | 8 |
| | | | D16ds v4 | 4 |
-| | | | D32ds v4 | 1 |
+| | | | D32ds v4 | 2 |
| | | | D48ds v4 | 1 |
| | | | D64ds v4 | 1 |
You can also mix multiple VM sizes on the Ddsv4-Type1. The following are sample
- 10 D4dsv4 + 14 D2dsv4

### Ddsv4-Type2
-The Ddsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Ddsv4-Type2 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
+The Ddsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 76 vCPUs, and 768 GiB of RAM. The Ddsv4-Type2 runs [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Ddsv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | D2ds v4 | 32 |
+| 64 | 76 | 768 GiB | D2ds v4 | 32 |
| | | | D4ds v4 | 19 |
| | | | D8ds v4 | 9 |
| | | | D16ds v4 | 4 |
You can also mix multiple VM sizes on the Dsv4-Type1. The following are sample c
### Dsv4-Type2
-The Dsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv4-Type2 runs [Dsv4-series](dv4-dsv4-series.md#dsv4-series) VMs.
+The Dsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Dsv4-Type2 runs [Dsv4-series](dv4-dsv4-series.md#dsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | D2s v4 | 32 |
+| 64 | 96 | 768 GiB | D2s v4 | 32 |
| | | | D4s v4 | 25 |
| | | | D8s v4 | 12 |
| | | | D16s v4 | 6 |
The following packing configuration outlines the max packing of uniform VMs you
## Dsv3

### Dsv3-Type1
-The Dsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (2.3 GHz Xeon® E5-2673 v4) processor. It offers 40 physical cores, 64 vCPUs, and 256 GiB of RAM. The Dsv3-Type1 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type1 is a Dedicated Host SKU utilizing the Intel® Broadwell (2.3 GHz Xeon® E5-2673 v4) processor. It offers 40 physical cores, 68 vCPUs, and 256 GiB of RAM. The Dsv3-Type1 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 40 | 64 | 256 GiB | D2s v3 | 32 |
-| | | | D4s v3 | 16 |
+| 40 | 68 | 256 GiB | D2s v3 | 32 |
+| | | | D4s v3 | 17 |
| | | | D8s v3 | 8 |
| | | | D16s v3 | 4 |
| | | | D32s v3 | 2 |
You can also mix multiple VM sizes on the Dsv3-Type1. The following are sample c
### Dsv3-Type2
-The Dsv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (2.1 GHz Xeon® Platinum 8171M) processor. It offers 48 physical cores, 76 vCPUs, and 504 GiB of RAM. The Dsv3-Type2 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (2.1 GHz Xeon® Platinum 8171M) processor. It offers 48 physical cores, 80 vCPUs, and 504 GiB of RAM. The Dsv3-Type2 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 48 | 76 | 504 GiB | D2s v3 | 32 |
-| | | | D4s v3 | 18 |
-| | | | D8s v3 | 9 |
-| | | | D16s v3 | 4 |
+| 48 | 80 | 504 GiB | D2s v3 | 32 |
+| | | | D4s v3 | 20 |
+| | | | D8s v3 | 10 |
+| | | | D16s v3 | 5 |
| | | | D32s v3 | 2 |
| | | | D48s v3 | 1 |
| | | | D64s v3 | 1 |
You can also mix multiple VM sizes on the Dsv3-Type2. The following are sample c
### Dsv3-Type3
-The Dsv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Dsv3-Type3 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 84 vCPUs, and 504 GiB of RAM. The Dsv3-Type3 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type3 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 52 | 80 | 504 GiB | D2s v3 | 32 |
-| | | | D4s v3 | 20 |
+| 52 | 84 | 504 GiB | D2s v3 | 32 |
+| | | | D4s v3 | 21 |
| | | | D8s v3 | 10 |
| | | | D16s v3 | 5 |
| | | | D32s v3 | 2 |
You can also mix multiple VM sizes on the Dsv3-Type3. The following are sample c
### Dsv3-Type4
-The Dsv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Dsv3-Type4 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
+The Dsv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Dsv3-Type4 runs [Dsv3-series](dv3-dsv3-series.md#dsv3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Dsv3-Type4 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | D2s v3 | 32 |
-| | | | D4s v3 | 25 |
+| 64 | 96 | 768 GiB | D2s v3 | 32 |
+| | | | D4s v3 | 24 |
| | | | D8s v3 | 12 |
| | | | D16s v3 | 6 |
| | | | D32s v3 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 8 | 8 | 64 GiB | DC8 v2 | 1 |
+| 8 | 8 | 64 GiB | DC8 v2 | 1 |
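To pack VMs onto one of these hosts yourself, pass the host's resource ID to `az vm create` and repeat per the packing tables above. A minimal sketch with hypothetical names:

```azurecli
# Resolve the dedicated host's resource ID.
hostId=$(az vm host show --resource-group myRG --host-group myHostGroup \
    --name myDsv3Host --query id --output tsv)

# Create a D4s v3 VM pinned to that host.
az vm create --resource-group myRG --name packedVM01 \
    --image UbuntuLTS --size Standard_D4s_v3 --host $hostId
```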
## Next steps
virtual-machines Dedicated Host Memory Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-memory-optimized-skus.md
The following packing configuration outlines the max packing of uniform VMs you
| | | | E32ads v5 | 2 |
| | | | E48ads v5 | 1 |
| | | | E64ads v5 | 1 |
+| | | | E96ads v5 | 1 |
+
## Easv5

### Easv5-Type1
The following packing configuration outlines the max packing of uniform VMs you
| | | | E32as v5 | 2 |
| | | | E48as v5 | 1 |
| | | | E64as v5 | 1 |
+| | | | E96as v5 | 1 |
## Edsv5

### Edsv5-Type1
The following packing configuration outlines the max packing of uniform VMs you
## Esv5

### Esv5-Type1
-The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
+The Esv5-Type1 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv5-Type1 runs [Esv5-series](ev5-esv5-series.md#esv5-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv5-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 96 | 768 GiB | E2s v5 | 32 |
+| 64 | 84 | 768 GiB | E2s v5 | 32 |
| | | | E4s v5 | 21 |
| | | | E8s v5 | 10 |
| | | | E16s v5 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
### Easv4-Type2
-The Easv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 112 vCPUs, and 768 GiB of RAM. The Easv4-Type2 runs [Easv4-series](eav4-easv4-series.md#easv4-series) VMs.
+The Easv4-Type2 is a Dedicated Host SKU utilizing AMD's EPYC™ 7763v processor. It offers 64 physical cores, 96 vCPUs, and 768 GiB of RAM. The Easv4-Type2 runs [Easv4-series](eav4-easv4-series.md#easv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Easv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 112 | 768 GiB | E2as v4 | 32 |
+| 64 | 96 | 768 GiB | E2as v4 | 32 |
| | | | E4as v4 | 21 |
| | | | E8as v4 | 10 |
| | | | E16as v4 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
### Edsv4-Type2
-The Edsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Edsv4-Type2 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
+The Edsv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 76 vCPUs, and 768 GiB of RAM. The Edsv4-Type2 runs [Edsv4-series](edv4-edsv4-series.md#edsv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Edsv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | E2ds v4 | 32 |
+| 64 | 76 | 768 GiB | E2ds v4 | 32 |
| | | | E4ds v4 | 19 |
| | | | E8ds v4 | 9 |
| | | | E16ds v4 | 4 |
The following packing configuration outlines the max packing of uniform VMs you
## Esv4

### Esv4-Type1
-The Esv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Esv4-Type1 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
+The Esv4-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv4-Type1 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv4-Type1 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 52 | 80 | 504 GiB | E2s v4 | 31 |
+| 52 | 64 | 504 GiB | E2s v4 | 31 |
| | | | E4s v4 | 15 |
| | | | E8s v4 | 7 |
| | | | E16s v4 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv4-Type2
-The Esv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv4-Type2 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
+The Esv4-Type2 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv4-Type2 runs [Esv4-series](ev4-esv4-series.md#esv4-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv4-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | E2s v4 | 32 |
+| 64 | 84 | 768 GiB | E2s v4 | 32 |
| | | | E4s v4 | 21 |
| | | | E8s v4 | 10 |
| | | | E16s v4 | 5 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type2
-The Esv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® 8171M) processor. It offers 48 physical cores, 78 vCPUs, and 504 GiB of RAM. The Esv3-Type2 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type2 is a Dedicated Host SKU utilizing the Intel® Skylake (Xeon® 8171M) processor. It offers 48 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv3-Type2 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type2 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 48 | 76 | 504 GiB | E2s v3 | 31 |
+| 48 | 64 | 504 GiB | E2s v3 | 31 |
| | | | E4s v3 | 15 |
| | | | E8s v3 | 7 |
| | | | E16s v3 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type3
-The Esv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 80 vCPUs, and 504 GiB of RAM. The Esv3-Type3 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type3 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Platinum 8272CL) processor. It offers 52 physical cores, 64 vCPUs, and 504 GiB of RAM. The Esv3-Type3 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type3 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 52 | 80 | 504 GiB | E2s v3 | 31 |
+| 52 | 64 | 504 GiB | E2s v3 | 31 |
| | | | E4s v3 | 15 |
| | | | E8s v3 | 7 |
| | | | E16s v3 | 3 |
The following packing configuration outlines the max packing of uniform VMs you
### Esv3-Type4
-The Esv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 119 vCPUs, and 768 GiB of RAM. The Esv3-Type4 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
+The Esv3-Type4 is a Dedicated Host SKU utilizing the Intel® Ice Lake (Xeon® Platinum 8370C) processor. It offers 64 physical cores, 84 vCPUs, and 768 GiB of RAM. The Esv3-Type4 runs [Esv3-series](ev3-esv3-series.md#ev3-series) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Esv3-Type4 host.

| Physical cores | Available vCPUs | Available RAM | VM Size | # VMs |
|-|--|--|--|-|
-| 64 | 119 | 768 GiB | E2s v3 | 32 |
+| 64 | 84 | 768 GiB | E2s v3 | 32 |
| | | | E4s v3 | 21 |
| | | | E8s v3 | 10 |
| | | | E16s v3 | 5 |
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-incremental-snapshots.md
az feature show --namespace Microsoft.Compute --name CreateOptionClone
### Restrictions

-- Cross-region snapshot copy is currently only available in East US 2 and West Central US.
+- Cross-region snapshot copy is currently only available in Central US, East US, East US 2, Germany West Central, North Central US, North Europe, South Central US, West Central US, West US, West US 2, West Europe, South India, and Central India.
- You must use version 2020-12-01 or newer of the Azure Compute REST API.

### Get started
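As a sketch of the cross-region copy flow with the Azure CLI (the resource names are placeholders, and the `--copy-start` flag assumes a recent CLI version):

```azurecli
# Start a cross-region copy of an existing incremental snapshot.
sourceId=$(az snapshot show --resource-group myRG --name mySnapshot \
    --query id --output tsv)

az snapshot create --resource-group myRG --name mySnapshotCopy \
    --location westus2 --source $sourceId --incremental true --copy-start true

# Poll the background copy's progress before using the new snapshot.
az snapshot show --resource-group myRG --name mySnapshotCopy \
    --query completionPercent --output tsv
```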
virtual-machines Dv2 Dsv2 Series Memory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv2-dsv2-series-memory.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Dv2 and Dsv2-series, a follow-on to the original D-series, features a more powerful CPU. DSv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. The Dv2-series has the same memory and disk configurations as the D-series.
+Dv2 and Dsv2-series, a follow-on to the original D-series, features a more powerful CPU. DSv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake), the Intel® Xeon® 8171M 2.1 GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. The Dv2-series has the same memory and disk configurations as the D-series.
## Dv2-series 11-15
-Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors.
+Dv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake), the Intel® Xeon® 8171M 2.1 GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors.
[ACU](acu.md): 210 - 250<br> [Premium Storage](premium-storage-performance.md): Not Supported<br>
Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® X
## DSv2-series 11-15
-DSv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors.
+DSv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake), the Intel® Xeon® 8171M 2.1 GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors.
[ACU](acu.md): 210 - 250 <sup>1</sup><br> [Premium Storage](premium-storage-performance.md): Supported<br>
More information on disk types: [Disk Types](./disks-types.md#ultra-disks)
## Next steps
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv2-dsv2-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Dv2 and DSv2-series, a follow-on to the original D-series, feature a more powerful CPU and optimal CPU-to-memory configuration making them suitable for most production workloads. The Dv2-series is about 35% faster than the D-series. Dv2-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series.
+The Dv2 and DSv2-series, a follow-on to the original D-series, feature a more powerful CPU and optimal CPU-to-memory configuration making them suitable for most production workloads. The Dv2-series is about 35% faster than the D-series. Dv2-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-series.
## Dv2-series
-Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake) or the the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with Intel Turbo Boost Technology 2.0.
+Dv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with Intel Turbo Boost Technology 2.0.
[ACU](acu.md): 210-250<br> [Premium Storage](premium-storage-performance.md): Not Supported<br>
Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® X
## DSv2-series
-DSv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake) or the the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with Intel Turbo Boost Technology 2.0 and use premium storage.
+DSv2-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with Intel Turbo Boost Technology 2.0 and use premium storage.
[ACU](acu.md): 210-250<br> [Premium Storage](premium-storage-performance.md): Supported<br>
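To check which of these sizes a given region offers, you can list and filter them with the Azure CLI; the region and filter below are only examples:

```azurecli
# List the Dv2/DSv2 sizes available in East US with their vCPU and memory shapes.
az vm list-sizes --location eastus --output table | grep -E 'Standard_DS?[0-9]+_v2'
```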
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv3-dsv3-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Dv3-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads. Memory has been expanded (from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Dv3-series no longer has the high memory VM sizes of the D/Dv2-series, those have been moved to the memory optimized [Ev3 and Esv3-series](ev3-esv3-series.md).
+The Dv3-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads. Memory has been expanded (from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Dv3-series no longer has the high memory VM sizes of the D/Dv2-series, those have been moved to the memory optimized [Ev3 and Esv3-series](ev3-esv3-series.md).
Example D-series use cases include enterprise-grade applications, relational databases, in-memory caching, and analytics.

## Dv3-series
-Dv3-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html). The Dv3-series sizes offer a combination of vCPU, memory, and temporary storage for most production workloads.
+Dv3-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html). The Dv3-series sizes offer a combination of vCPU, memory, and temporary storage for most production workloads.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Dsv3 sizes. The pricing and billing meters for Dsv3 sizes are the same as Dv3-series.
Dv3-series VMs feature Intel® Hyper-Threading Technology.
## Dsv3-series
-Dsv3-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html) and use premium storage. The Dsv3-series sizes offer a combination of vCPU, memory, and temporary storage for most production workloads.
+Dsv3-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html) and use premium storage. The Dsv3-series sizes offer a combination of vCPU, memory, and temporary storage for most production workloads.
Dsv3-series VMs feature Intel® Hyper-Threading Technology.
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dv4-dsv4-series.md
Last updated 06/08/2020
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Dv4 and Dsv4-series runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html).
+The Dv4 and Dsv4-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. It features an all core Turbo clock speed of 3.4 GHz, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html).
> [!NOTE]
> For frequently asked questions, see [Azure VM sizes with no local temp disk](azure-vms-no-temp-disk.yml).

## Dv4-series
-Dv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Dv4-series sizes offer a combination of vCPU, memory and remote storage options for most production workloads. Dv4-series VMs feature [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html).
+Dv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Dv4-series sizes offer a combination of vCPU, memory and remote storage options for most production workloads. Dv4-series VMs feature [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html).
Remote Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Dsv4 sizes. The pricing and billing meters for Dsv4 sizes are the same as Dv4-series.
Remote Data disk storage is billed separately from virtual machines. To use prem
## Dsv4-series
-Dsv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Dv4-series sizes offer a combination of vCPU, memory and remote storage options for most production workloads. Dsv4-series VMs feature [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html). Remote Data disk storage is billed separately from virtual machines.
+Dsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Dsv4-series sizes offer a combination of vCPU, memory and remote storage options for most production workloads. Dsv4-series VMs feature [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html). Remote Data disk storage is billed separately from virtual machines.
[ACU](acu.md): 195-210<br> [Premium Storage](premium-storage-performance.md): Supported<br>
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/edv4-edsv4-series.md
Last updated 10/20/2021
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Edv4 and Edsv4-series runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, and are ideal for various memory-intensive enterprise applications and feature up to 504 GiB of RAM, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Ev3/Esv3](./ev3-esv3-series.md) sizes with [Gen2 VMs](./generation-2.md). It features an all core Turbo clock speed of 3.4 GHz.
+The Edv4 and Edsv4-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, and are ideal for various memory-intensive enterprise applications and feature up to 504 GiB of RAM, [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html) and [Intel&reg; Advanced Vector Extensions 512 (Intel&reg; AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). They also support [Intel&reg; Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html). These new VM sizes will have 50% larger local storage, as well as better local disk IOPS for both read and write compared to the [Ev3/Esv3](./ev3-esv3-series.md) sizes with [Gen2 VMs](./generation-2.md). It features an all core Turbo clock speed of 3.4 GHz.
## Edv4-series
-Edv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors. The Edv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage. You can attach Standard SSDs and Standard HDDs disk storage to the Edv4 VMs.
+Edv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors. The Edv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage. You can attach Standard SSDs and Standard HDDs disk storage to the Edv4 VMs.
[ACU](acu.md): 195 - 210<br> [Premium Storage](premium-storage-performance.md): Not Supported<br>
Edv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake)
## Edsv4-series
-Edsv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors. The Edsv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
+Edsv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors. The Edsv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
[ACU](acu.md): 195-210<br> [Premium Storage](premium-storage-performance.md): Supported<br>
virtual-machines Ev3 Esv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev3-esv3-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Ev3 and Esv3-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), or Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads, and bringing the Ev3 into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Ev3 is the follow up to the high memory VM sizes of the D/Dv2 families.
+The Ev3 and Esv3-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads, and bringing the Ev3 into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyperthreading. The Ev3 is the follow up to the high memory VM sizes of the D/Dv2 families.
## Ev3-series
-Ev3-series instances run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processors, and feature Intel Turbo Boost Technology 2.0. Ev3-series instances are ideal for memory-intensive enterprise applications.
+Ev3-series instances run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processors, and feature Intel Turbo Boost Technology 2.0. Ev3-series instances are ideal for memory-intensive enterprise applications.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the ESv3 sizes. The pricing and billing meters for ESv3 sizes are the same as Ev3-series.
Ev3-series VMs feature Intel® Hyper-Threading Technology.
## Esv3-series
-Esv3-series instances run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processor, feature Intel Turbo Boost Technology 2.0 and use premium storage. Esv3-series instances are ideal for memory-intensive enterprise applications.
+Esv3-series instances run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processor, feature Intel Turbo Boost Technology 2.0 and use premium storage. Esv3-series instances are ideal for memory-intensive enterprise applications.
Esv3-series VMs feature Intel® Hyper-Threading Technology.
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ev4-esv4-series.md
Last updated 6/8/2020
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Ev4 and Esv4-series runs on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications and feature up to 504GiB of RAM. It features an all core Turbo clock speed of 3.4 GHz.
+The Ev4 and Esv4-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 504 GiB of RAM. They feature an all-core Turbo clock speed of 3.4 GHz.
> [!NOTE]
> For frequently asked questions, refer to [Azure VM sizes with no local temp disk](azure-vms-no-temp-disk.yml).

## Ev4-series
-Ev4-series sizes run on the Intel Xeon&reg; Platinum 8272CL (Cascade Lake). The Ev4-series instances are ideal for memory-intensive enterprise applications. Ev4-series VMs feature Intel&reg; Hyper-Threading Technology.
+Ev4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel Xeon&reg; Platinum 8272CL (Cascade Lake). The Ev4-series instances are ideal for memory-intensive enterprise applications. Ev4-series VMs feature Intel&reg; Hyper-Threading Technology.
Remote Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Esv4 sizes. The pricing and billing meters for Esv4 sizes are the same as Ev4-series.
## Esv4-series
-Esv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Esv4-series instances are ideal for memory-intensive enterprise applications. Evs4-series VMs feature Intel&reg; Hyper-Threading Technology. Remote Data disk storage is billed separately from virtual machines.
+Esv4-series sizes run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake). The Esv4-series instances are ideal for memory-intensive enterprise applications. Esv4-series VMs feature Intel&reg; Hyper-Threading Technology. Remote data disk storage is billed separately from virtual machines.
[ACU](acu.md): 195-210<br> [Premium Storage](premium-storage-performance.md): Supported<br>
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/fsv2-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The Fsv2-series runs on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors and Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all core Turbo clock speed of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors. These instructions provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. In other words, they're really fast for any computational workload.
+The Fsv2-series runs on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), Intel® Xeon® Platinum 8272CL (Cascade Lake), or Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all-core Turbo clock speed of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors. These instructions provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. In other words, they're really fast for any computational workload.
Fsv2-series VMs feature Intel® Hyper-Threading Technology.
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-json.md
description: Learn how to create a template to use with Azure Image Builder.
Previously updated : 05/24/2021 Last updated : 01/10/2022
This is the basic template format:
```json
{
  "type": "Microsoft.VirtualMachineImages/imageTemplates",
- "apiVersion": "2020-02-14",
+ "apiVersion": "2021-10-01",
"location": "<region>", "tags": { "<name>": "<value>",
## Type and API version
-The `type` is the resource type, which must be `"Microsoft.VirtualMachineImages/imageTemplates"`. The `apiVersion` will change over time as the API changes, but should be `"2020-02-14"` for now.
+The `type` is the resource type, which must be `"Microsoft.VirtualMachineImages/imageTemplates"`. The `apiVersion` will change over time as the API changes, but should be `"2021-10-01"` for now.
```json "type": "Microsoft.VirtualMachineImages/imageTemplates",
- "apiVersion": "2020-02-14",
+ "apiVersion": "2021-10-01",
```

## Location
runOutputName=<runOutputName>
az resource show \
  --ids "/subscriptions/$subscriptionID/resourcegroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/ImageTemplateLinuxRHEL77/runOutputs/$runOutputName" \
- --api-version=2020-02-14
+ --api-version=2021-10-01
```

Output:
To start a build, you need to invoke `Run` on the image template resource. Examples of `run` commands:

```PowerShell
-Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Run -Force
+Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2021-10-01" -Action Run -Force
```
az resource invoke-action \
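The Azure CLI call above is truncated in this excerpt. A minimal sketch of the full command, assuming the same variable names as the PowerShell example:

```bash
# Invoke the Run action on the image template resource (hedged sketch;
# variable names follow the PowerShell example above).
az resource invoke-action \
  --resource-group $imageResourceGroup \
  --resource-type Microsoft.VirtualMachineImages/imageTemplates \
  --name $imageTemplateName \
  --action Run
```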
### Cancelling an Image Build

If you are running an image build that you believe is incorrect, is waiting for user input, or that you feel will never complete successfully, you can cancel the build.
-The build can be cancelled any time. If the distribution phase has started you can still cancel, but you will need to clean up any images that may not be completed. The cancel command does not wait for cancel to complete, please monitor `lastrunstatus.runstate` for cancelling progress, using these status [commands](image-builder-troubleshoot.md#customization-log).
+The build can be canceled at any time. If the distribution phase has started, you can still cancel, but you will need to clean up any images that may not be completed. The cancel command doesn't wait for the cancellation to complete; monitor `lastrunstatus.runstate` for cancellation progress by using these status [commands](image-builder-troubleshoot.md#customization-log).
Examples of `cancel` commands:

```PowerShell
-Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14" -Action Cancel -Force
+Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2021-10-01" -Action Cancel -Force
```
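A sketch of the equivalent Azure CLI cancel call, assuming the same variable names as the PowerShell example:

```bash
# Cancel the in-progress build; monitor lastrunstatus.runstate afterwards.
az resource invoke-action \
  --resource-group $imageResourceGroup \
  --resource-type Microsoft.VirtualMachineImages/imageTemplates \
  --name $imageTemplateName \
  --action Cancel
```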
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/sizes-b-series-burstable.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The B-series VMs can be deployed on a variety of hardware types and processors, so competitive bandwidth allocation is provided. B-series VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, proof of concepts, small databases and development build environments. These workloads typically have burstable performance requirements. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the virtual machine. The B-series provides you with the ability to purchase a VM size with baseline performance that can build up credits when it is using less than its baseline. When the VM has accumulated credits, the VM can burst above the baseline using up to 100% of the vCPU when your application requires higher CPU performance.
+The B-series VMs can be deployed on a variety of hardware types and processors, so competitive bandwidth allocation is provided. The B-series runs on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake), the Intel® Xeon® 8171M 2.1 GHz (Skylake), the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. B-series VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, proofs of concept, small databases, and development build environments. These workloads typically have burstable performance requirements. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the virtual machine. The B-series provides you with the ability to purchase a VM size with baseline performance that can build up credits when it is using less than its baseline. When the VM has accumulated credits, the VM can burst above the baseline, using up to 100% of the vCPU, when your application requires higher CPU performance.
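As an illustration of the credit model, a hedged sketch of checking a B-series VM's credit balance with Azure Monitor (the resource path is a placeholder; `CPU Credits Remaining` and `CPU Credits Consumed` are the burstable-VM metric names):

```bash
# Query recent credit metrics at 5-minute granularity.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>" \
  --metric "CPU Credits Remaining" "CPU Credits Consumed" \
  --interval PT5M
```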
The B-series comes in the following VM sizes:
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
Previously updated : 12/08/2021 Last updated : 1/10/2022
You may verify the values of the maximum number of queue-pairs when the issue is observed, as follows:
```
max_qp: 4096
```
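A hedged sketch of producing that output, assuming the `ibv_devinfo` utility from the InfiniBand user-space tools is present on the VM:

```bash
# Print verbose device attributes and filter for the queue-pair limit.
ibv_devinfo -v | grep -i max_qp
```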
-## Accelerated Networking on HB, HC, HBv2, and NDv2
+## Accelerated Networking on HB, HC, HBv2, HBv3, and NDv2
-[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughout (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0) and this may require tweaking of the MPI command lines especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images or disable Accelerated Networking if not required.
+[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](../../hb-series.md), [HC](../../hc-series.md), [HBv2](../../hbv2-series.md), [HBv3](../../hbv3-series.md), and [NDv2](../../ndv2-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to the earlier mlx5_0), and this may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X). The simplest solution currently may be to use the latest HPC-X on the CentOS-HPC VM images, or to disable Accelerated Networking if it is not required.
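For example, a hedged sketch of pinning UCX to a specific InfiniBand device when the interface name differs (the device name, rank count, and application are placeholders; check `ibv_devinfo` for the actual name):

```bash
# Force UCX (used by OpenMPI/HPC-X) onto the first port of the named device.
mpirun -np 16 -x UCX_NET_DEVICES=mlx5_0:1 ./my_mpi_app
```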
More details on this are available on this [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-and-hbv2/ba-p/2067965) with instructions on how to address any observed issues.

## InfiniBand driver installation on non-SR-IOV VMs
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/setup-mpi.md
Previously updated : 04/16/2021 Last updated : 01/10/2022
make -j 8 && make install
```

> [!NOTE]
-> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md#accelerated-networking-on-hb-hc-hbv2-and-ndv2) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
+> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md#accelerated-networking-on-hb-hc-hbv2-hbv3-and-ndv2) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
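The build snippet above is truncated in this excerpt; a minimal sketch of a full UCX source build, in which the release version and install prefix are assumptions:

```bash
# Build UCX from a release tarball (version and prefix chosen for illustration).
wget https://github.com/openucx/ucx/releases/download/v1.13.1/ucx-1.13.1.tar.gz
tar -xzf ucx-1.13.1.tar.gz
cd ucx-1.13.1
./configure --prefix=/opt/ucx
make -j 8 && make install
```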
## HPC-X
virtual-network Virtual Network Troubleshoot Cannot Delete Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-troubleshoot-cannot-delete-vnet.md
Last updated 10/31/2018
You might receive errors when you try to delete a virtual network in Microsoft Azure.
5. [Check whether the virtual network is connected to other resource](#check-whether-the-virtual-network-is-connected-to-other-resource).
6. [Check whether a virtual machine is still running in the virtual network](#check-whether-a-virtual-machine-is-still-running-in-the-virtual-network).
7. [Check whether the virtual network is stuck in migration](#check-whether-the-virtual-network-is-stuck-in-migration).
+8. [Check whether the virtual network was used by a web app for VNet integration](#check-whether-the-virtual-network-was-used-by-a-web-app-for-vnet-integration).
## Troubleshooting steps
If the virtual network is stuck in a migration state, it cannot be deleted. Run the following command to abort the migration:
```powershell
Move-AzureVirtualNetwork -VirtualNetworkName "Name" -Abort
```
+### Check whether the virtual network was used by a web app for VNet integration
+
+If the virtual network was integrated with a web app in the past, and the web app was deleted without disconnecting the VNet integration, see [Deleting the App Service plan or web app before disconnecting the VNet integration](https://github.com/MicrosoftDocs/azure-docs/blob/046310ca15df6c82612b11971b9481b98125dd64/includes/app-service-web-vnet-troubleshooting.md).
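As a hedged way to confirm the leftover integration before following that link (the resource names are placeholders), inspect the delegated subnet for a remaining service association link:

```bash
# A non-empty serviceAssociationLinks array indicates a leftover integration.
az network vnet subnet show \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --query serviceAssociationLinks
```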
## Next steps

- [Azure Virtual Network](virtual-networks-overview.md)
- [Azure Virtual Network frequently asked questions (FAQ)](virtual-networks-faq.md)
virtual-wan Quickstart Any To Any Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/quickstart-any-to-any-template.md
Previously updated : 02/02/2021 Last updated : 01/11/2022
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button.
## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Public key certificate data is required for this configuration. Sample data is provided in the article. However, the sample data is provided only to satisfy the template requirements in order to create a P2S gateway. After the template completes and the resources are deployed, you must update this field with your own certificate data in order for the configuration to work. See [User VPN certificates](certificates-point-to-site.md#cer).
+* Public key certificate data is required for this configuration. Sample data is provided in the article. However, the sample data is provided only to satisfy the template requirements in order to create a P2S gateway. After the template completes and the resources are deployed, you must update this field with your own certificate data in order for the configuration to work. See [Generate and export certificates](certificates-point-to-site.md#cer) for information and steps.
## <a name="review"></a>Review the template
Multiple Azure resources are defined in the template:
* [**Microsoft.Network/expressroutegateways**](/azure/templates/microsoft.network/expressroutegateways)
* [**Microsoft.Network/vpnserverconfigurations**](/azure/templates/microsoft.network/vpnserverconfigurations)
->[!NOTE]
+> [!NOTE]
> This ARM template doesn't create the customer-side resources required for hybrid connectivity. After you deploy the template, you still need to create and configure the P2S VPN clients, the VPN branches (Local Sites), and connect the ExpressRoute circuits.
->
+ To find more templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
To deploy this template properly, you must use the **Deploy to Azure** button in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.network%2fvirtual-wan-with-all-gateways%2fazuredeploy.json)

1. To view the template, click **Edit template**. On this page, you can adjust some of the values such as address space or the name of certain resources. **Save** to save your changes, or **Discard**.
-1. On the template page, enter the values. For this template, the P2S public certificate data is required. If you are using this article as an exercise, you can use the following data from this .cer file as sample data for both hubs. Once the template runs and deployment is complete, in order to use the P2S configuration, you must replace this information with the public key [certificate data](certificates-point-to-site.md#cer) for your own deployment.
+1. On the template page, enter the values. For this template, the P2S public certificate data is required. If you are using this article as an exercise, you can use the following .cer file data as sample data for both hubs. Once the template runs and deployment is complete, you must replace this information with the public key certificate data for your own deployment in order to use the P2S configuration. For more information, see [Generate and export certificates](certificates-point-to-site.md#cer); a hedged openssl sketch also follows these steps.
+
+ > [!NOTE]
+ > The certificate data in the example below is supplied for demonstration purposes only. You must replace this information with the public key [certificate data](certificates-point-to-site.md#cer) for your own deployment if you want this configuration to work properly.
+ >
```certificate-data
- MIIC5zCCAc+gAwIBAgIQGxd3Av1q6LJDZ71e3TzqcTANBgkqhkiG9w0BAQsFADAW
- MRQwEgYDVQQDDAtQMlNSb290Q2VydDAeFw0yMDExMDkyMjMxNTVaFw0yMTExMDky
- MjUxNTVaMBYxFDASBgNVBAMMC1AyU1Jvb3RDZXJ0MIIBIjANBgkqhkiG9w0BAQEF
- AAOCAQ8AMIIBCgKCAQEA33fFra/E0YmGuXLKmYcdvjsYpKwQmw8DjjDkbwhE9jcc
- Dp50e7F1P6Rxo1T6Hm3dIhEji+0QkP4Ie0XPpw0eW77+RWUiG9XJxGqtJ3Q4tyRy
- vBfsHORcqMlpV3VZOXIxrk+L/1sSm2xAc2QGuOqKaDNNoKmjrSGNVAeQHigxbTQg
- zCcyeuhFxHxAaxpW0bslK2hEZ9PhuAe22c2SHht6fOIDeXkadzqTFeV8wEZdltLr
- 6Per0krxf7N2hFo5Cfz0KgWlvgdKLL7dUc9cjHo6b6BL2pNbLh8YofwHQOQbwt6H
- miAkEnx1EJ5N8AWuruUTByR2jcWyCnEAUSH41+nk4QIDAQABozEwLzAOBgNVHQ8B
- Af8EBAMCAgQwHQYDVR0OBBYEFJMgnJSYHH5AJ+9XB11usKRwjbjNMA0GCSqGSIb3
- DQEBCwUAA4IBAQBOy8Z5FBd/nvgDcjvAwNCw9h5RHzgtgQqDP0qUjEqeQv3ALeC+
- k/F2Tz0OWiPEzX5N+MMrf/jiYsL2exXuaPWCF5U9fu8bvs89GabHma8MGU3Qua2x
- Imvt0whWExQMjoyU8SNUi2S13fnRie9ZlSwNh8B/OIUUEtVhQsd4OfuZZFVH4xGp
- ibJMSMe5JBbZJC2tCdSdTLYfYJqrLkVuTjynXOjmz2JXfwnDNqEMdIMMjXzlNavR
- J8SNtAoptMOK5vAvlySg4LYtFyXkl0W0vLKIbbHf+2UszuSCijTUa3o/Y1FoYSfi
- eJH431YTnVLuwdd6fXkXFBrXDhjNsU866+hE
+ MIIC9zCCAd+gAwIBAgIQOn0lVXm3E5hH/A7CdSuPyDANBgkqhkiG9w0BAQsFADAe
+ MRwwGgYDVQQDDBNEZW1vUm9vdENlcnRpZmljYXRlMB4XDTIyMDExMTE5NDgwOFoX
+ DTMyMDExMTE5NTgwOVowHjEcMBoGA1UEAwwTRGVtb1Jvb3RDZXJ0aWZpY2F0ZTCC
+ ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM3m0yqbpV46r6D8pOjODw1E
+ O5QBf9kynypwRy0yrgj+6j1YzVogYQgBFHGgg1OszoAWorvN1KmuqOvdqR5Jtiuv
+ A3p8dfsWVZlkthTX9MaWQfskCThE+NucphalFgEOcpdJpN9kt+n1IMgbqI0metcW
+ lCyOkUke13jcNkYEd5oRi053yEWUOSfNoDvxmbwrGdtpPo8VH+7bZaNB8mUfxUjO
+ Hg6cv+BV910q0c+O6QWj5B5W+tJGDTxwuokyI94Fsb9FG6wxyZGSGX0uTBiuUC7V
+ Uf9FZur9HTfofkiy6QX2+6j0iQfqv7jM9NOnAzhUT+l+2l+6glEbkA2R3vH5wZ0C
+ AwEAAaMxMC8wDgYDVR0PAQH/BAQDAgIEMB0GA1UdDgQWBBQhyYPrM242o1FzArus
+ 77YlfhwkUzANBgkqhkiG9w0BAQsFAAOCAQEAL0wMThonNJ6dPRlbopqbuGLttDnX
+ OnpKLrv6d8kl6y8z4orYUi1T7Q3wjlMwVoHgqc8r7DMWroWG8mFlCyVdUYH9oYQS
+ m60v1fltvRxtFZiB3jzAMOcQsqr+v6QlAkr4RF7f7JtuLxwUCvVlF+rrQOAu9pu7
+ Kh180o9a79CgrA67DTSYP4wI1YRKglWK8eAxEkAfHTXwC/MJmf3XMMyb3cBWiirl
+ FLlDgEi4Jb14vd3diBg51df8WbW/+jmoNIbrWkpLhL27sSx6rgN/2NUYzdA4MWqp
+ Odrcs3wQsYovibqHiQUFHc24bvlcKiEpL535nHrSJR6PITm3Wh83yQ02mQ==
```

1. When you have finished entering values, select **Review + create**.
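If you prefer to generate your own test root certificate data instead of the sample above, the following is a hedged sketch using openssl; the linked article uses PowerShell's `New-SelfSignedCertificate`, so treat this as an illustrative alternative with placeholder names:

```bash
# Create a self-signed root certificate for testing (not for production).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout P2SRoot.key -out P2SRoot.crt -subj "/CN=P2SRootCert"
# The template field expects the base64 body without the BEGIN/END lines.
openssl x509 -in P2SRoot.crt -outform der | base64 -w 0
```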
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule groups and rules.
Previously updated : 01/10/2022 Last updated : 01/11/2022
Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits.
The Application Gateway WAF comes pre-configured with CRS 3.1 by default, but you can choose to use CRS 3.2, 3.0, or 2.2.9 instead.
-CRS 3.2 (public preview) offers a new engine and new rule sets defending against Java infections, an initial set of file upload checks, fixed false positives, and more.
+CRS 3.2 (preview) offers a new engine and new rule sets defending against Java injection attacks, an initial set of file upload checks, fixes for false positives, and more.
CRS 3.1 offers reduced false positives compared with CRS 3.0 and 2.2.9. You can also [customize rules to suit your needs](application-gateway-customize-waf-rules-portal.md).
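For example, a hedged sketch of opting a WAF policy into CRS 3.2 with the Azure CLI (the policy and resource group names are placeholders):

```bash
# Attach the OWASP CRS 3.2 managed rule set to an existing WAF policy.
az network application-gateway waf-policy managed-rule rule-set add \
  --policy-name <waf-policy-name> \
  --resource-group <resource-group> \
  --type OWASP \
  --version 3.2
```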