Updates from: 01/12/2022 02:07:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/access-tokens.md
If the **response_type** parameter in an `/authorize` request includes `token`,
## Request a token
-To request an access token, you need an authorization code. Below is an example of a request to the `/authorize` endpoint for an authorization code. Custom domains are not supported for use with access tokens. Use your tenant-name.onmicrosoft.com domain in the request URL.
+To request an access token, you need an authorization code. Below is an example of a request to the `/authorize` endpoint for an authorization code.
In the following example, you replace these values in the query string:
-- `<tenant-name>` - The name of your Azure AD B2C tenant.
+- `<tenant-name>` - The name of your [Azure AD B2C tenant](tenant-management.md#get-your-tenant-name). If you're using a custom domain, replace `tenant-name.b2clogin.com` with your domain, such as `contoso.com`.
- `<policy-name>` - The name of your custom policy or user flow.
- `<application-ID>` - The application identifier of the web application that you registered to support the user flow.
- `<application-ID-URI>` - The application identifier URI that you set under the **Expose an API** blade of the client application.
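
As an illustrative sketch only, the authorization-code request URL described above can be assembled like this in Python. Every tenant, policy, client, redirect, and scope value below is a hypothetical placeholder, not a value from this article:

```python
from urllib.parse import urlencode

# All values below are hypothetical placeholders for illustration only.
tenant = "contoso"                                   # <tenant-name>
policy = "b2c_1_signupsignin1"                       # <policy-name>
client_id = "00000000-0000-0000-0000-000000000000"   # <application-ID>
app_id_uri = "https://contoso.onmicrosoft.com/api"   # <application-ID-URI>

params = {
    "response_type": "code",           # request an authorization code
    "client_id": client_id,
    "redirect_uri": "https://jwt.ms",  # placeholder reply URL
    "scope": f"openid offline_access {app_id_uri}/read",  # hypothetical scope
    "state": "arbitrary_data",
}
url = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
       f"{policy}/oauth2/v2.0/authorize?" + urlencode(params))
print(url)
```

If you use a custom domain, the `{tenant}.b2clogin.com` host segment would be replaced by that domain instead.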
active-directory-b2c Claim Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/claim-resolver-overview.md
Previously updated : 12/12/2021 Last updated : 1/11/2022
The following table lists the [OAuth2 identity provider](oauth2-technical-profil
| Claim | Description | Example |
| -- | -- | --|
-| {oauth2:access_token} | The access token. | N/A |
-| {oauth2:refresh_token} | The refresh token. | N/A |
+| {oauth2:access_token} | The OAuth2 identity provider access token. The `access_token` attribute. | `eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1Ni...` |
+| {oauth2:token_type} | The type of the access token. The `token_type` attribute. | Bearer |
+| {oauth2:expires_in} | The length of time that the access token is valid in seconds. The `expires_in` attribute. The output claim [DataType](claimsschema.md#datatype) must be `int` or `long`. | 960000 |
+| {oauth2:refresh_token} | The OAuth2 identity provider refresh token. The `refresh_token` attribute. | `eyJraWQiOiJacW9pQlp2TW5pYVc2MUY...` |
+
+To use the OAuth2 identity provider claim resolvers, set the output claim's `PartnerClaimType` attribute to the claim resolver. The following example demonstrates how to get the external identity provider claims:
+
+```xml
+<ClaimsProvider>
+ <DisplayName>Contoso</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="Contoso-OAUTH">
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessTokenType" PartnerClaimType="{oauth2:token_type}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderAccessTokenExpiresIn" PartnerClaimType="{oauth2:expires_in}" />
+ <OutputClaim ClaimTypeReferenceId="identityProviderRefreshToken" PartnerClaimType="{oauth2:refresh_token}" />
+ </OutputClaims>
+ ...
+ </TechnicalProfile>
+ </TechnicalProfiles>
+</ClaimsProvider>
+```
+ ## Using claim resolvers
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
Previously updated : 09/20/2021 Last updated : 01/11/2022
The **OutputClaimsTransformations** element may contain a collection of **Output
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
|SingleLogoutEnabled| No| Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`.|
|ForceAuthN| No| Passes the ForceAuthN value in the SAML authentication request to determine if the external SAML IDP will be forced to prompt the user for authentication. By default, Azure AD B2C sets the ForceAuthN value to false on initial login. If the session is then reset (for example, by using `prompt=login` in OIDC), the ForceAuthN value is set to `true`. Setting the metadata item as shown below forces the value for all requests to the external IDP. Possible values: `true` or `false`.|
+|ProviderName| No| Passes the ProviderName value in the SAML authentication request.|
## Cryptographic keys
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Returns a date/time string representing a date to which a specified time interva
| **value** |Required | Number | The number of units you want to add. It can be positive (to get dates in the future) or negative (to get dates in the past). |
| **dateTime** |Required | DateTime | DateTime representing the date to which the interval is added. |
+When passing a date string as input, use the [CDate](#cdate) function to wrap the datetime string. To get the system time in UTC, use the [Now](#now) function.
+
The **interval** string must have one of the following values:
* yyyy Year
* m Month
The **interval** string must have one of the following values:
* n Minute
* s Second
-**Example 1: Add 7 days to hire date**
+**Example 1: Generate a date value based on incoming StatusHireDate from Workday** <br>
`DateAdd("d", 7, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/23/2012 7:00:00 AM
-
-**Example 2: Get a date ten days prior to hire date**
-`DateAdd("d", -10, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/6/2012 7:00:00 AM
-
-**Example 3: Add two weeks to hire date**
-`DateAdd("ww", 2, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/30/2012 7:00:00 AM
-
-**Example 4: Add ten months to hire date**
-`DateAdd("m", 10, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 1/16/2013 7:00:00 AM
-
-**Example 5: Add two years to hire date**
-`DateAdd("yyyy", 2, CDate([StatusHireDate]))`
-* **INPUT** (StatusHireDate): 2012-03-16-07:00
-* **OUTPUT**: 3/16/2014 7:00:00 AM
+
+| Example | interval | value | dateTime (value of variable StatusHireDate) | output |
+| -- | -- | -- | -- | -- |
+| Add 7 days to hire date | "d" | 7 | 2012-03-16-07:00 | 3/23/2012 7:00:00 AM |
+| Get a date ten days prior to hire date | "d" | -10 | 2012-03-16-07:00 | 3/6/2012 7:00:00 AM |
+| Add two weeks to hire date | "ww" | 2 | 2012-03-16-07:00 | 3/30/2012 7:00:00 AM |
+| Add ten months to hire date | "m" | 10 | 2012-03-16-07:00 | 1/16/2013 7:00:00 AM |
+| Add two years to hire date | "yyyy" | 2 | 2012-03-16-07:00 | 3/16/2014 7:00:00 AM |
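
The interval arithmetic in the table above can be modeled with a rough Python sketch. This is an illustration of the semantics only, not the provisioning engine's implementation; it covers just the "d", "ww", "m", and "yyyy" intervals and ignores end-of-month clamping (for example, Jan 31 plus one month):

```python
from datetime import datetime, timedelta

def date_add(interval, value, dt):
    """Illustrative model of DateAdd for the "d", "ww", "m", and "yyyy" intervals."""
    if interval == "d":
        return dt + timedelta(days=value)
    if interval == "ww":
        return dt + timedelta(weeks=value)
    if interval in ("m", "yyyy"):
        months = value * 12 if interval == "yyyy" else value
        total = dt.month - 1 + months          # months since year start, may be negative
        return dt.replace(year=dt.year + total // 12, month=total % 12 + 1)
    raise ValueError(f"unsupported interval: {interval}")

# CDate("2012-03-16-07:00") parsed here as a naive datetime for illustration.
hire = datetime(2012, 3, 16, 7, 0)
print(date_add("d", 7, hire))     # 2012-03-23 07:00:00
print(date_add("m", 10, hire))    # 2013-01-16 07:00:00
print(date_add("yyyy", 2, hire))  # 2014-03-16 07:00:00
```

Note how adding 10 months rolls the year forward, matching the 1/16/2013 row in the table.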
+
### DateDiff
**Function:**
This function uses the *interval* parameter to return a number that indicates th
| **date1** |Required | DateTime | DateTime representing a valid date. |
| **date2** |Required | DateTime | DateTime representing a valid date. |
+When passing a date string as input, use the [CDate](#cdate) function to wrap the datetime string. To get the system time in UTC, use the [Now](#now) function.
+
The **interval** string must have one of the following values:
* yyyy Year
* m Month
The IIF function returns one of a set of possible values based on a specified co
| **valueIfTrue** |Required |Variable or String | If the condition evaluates to true, the returned value. |
| **valueIfFalse** |Required |Variable or String |If the condition evaluates to false, the returned value.|
-**Example:**
+The following comparison operators can be used in the *condition*:
+* Equal to (=) and not equal to (<>)
+* Greater than (>) and greater than or equal to (>=)
+* Less than (<) and less than or equal to (<=)
+
+**Example:** Set the target attribute value to the source country attribute if country="USA"; otherwise, set the target attribute value to the source department attribute.
`IIF([country]="USA",[country],[department])`
+#### Known limitations and workarounds for IIF function
+* The IIF function currently does not support AND and OR logical operators.
+* To implement AND logic, use a nested IIF statement chained along the *trueValue* path.
+ Example: If country="USA" and state="CA", return value "True", else return "False".
+ `IIF([country]="USA",IIF([state]="CA","True","False"),"False")`
+* To implement OR logic, use a nested IIF statement chained along the *falseValue* path.
+ Example: If country="USA" or state="CA", return value "True", else return "False".
+ `IIF([country]="USA","True",IIF([state]="CA","True","False"))`
+* If the source attribute used within the IIF function is empty or null, the condition check fails.
+ * Unsupported IIF expression examples:
+ * `IIF([country]="","Other",[country])`
+ * `IIF(IsNullOrEmpty([country]),"Other",[country])`
+ * `IIF(IsPresent([country]),[country],"Other")`
+ * Recommended workaround: Use the [Switch](#switch) function to check for empty/null values. Example: If country attribute is empty, set value "Other". If it is present, pass the country attribute value to target attribute.
+ * `Switch([country],[country],"","Other")`
+<br>
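
The nested AND/OR workarounds above can be sketched in Python to make the nesting direction concrete. This is an illustrative model only, not provisioning-expression syntax, and unlike the expression language, Python evaluates both branch arguments eagerly:

```python
def iif(condition, value_if_true, value_if_false):
    """Model of the IIF expression: return one of two values based on a condition."""
    return value_if_true if condition else value_if_false

# Hypothetical source attribute values for illustration.
attrs = {"country": "USA", "state": "CA", "department": "Sales"}

# AND logic: nest the second IIF along the trueValue path.
both = iif(attrs["country"] == "USA",
           iif(attrs["state"] == "CA", "True", "False"),
           "False")

# OR logic: nest the second IIF along the falseValue path.
either = iif(attrs["country"] == "USA",
             "True",
             iif(attrs["state"] == "CA", "True", "False"))

print(both, either)  # True True
```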
### InStr **Function:**
active-directory Howto Mfa Reporting Datacollection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-reporting-datacollection.md
Title: Azure AD MFA user data collection - Azure Active Directory
-description: What information is used to help authenticate users by Azure AD Multi-Factor Authentication?
+ Title: Azure AD user data collection - Azure Active Directory
+description: What information is used to help authenticate users by self-service password reset and Azure AD Multi-Factor Authentication?
Previously updated : 11/21/2019 Last updated : 01/07/2021
-# Azure AD Multi-Factor Authentication user data collection
+# Azure AD user data collection for multifactor authentication and self-service password reset
-This document explains how to find user information collected by Azure Multi-Factor Authentication Server (MFA Server) and Azure AD MFA (Cloud-based) in the event you would like to remove it.
+This document explains how to find user information collected by Azure Multi-Factor Authentication Server (MFA Server), Azure AD MFA (Cloud-based), and self-service password reset (SSPR) in the event you would like to remove it.
[!INCLUDE [gdpr-hybrid-note](../../../includes/gdpr-hybrid-note.md)]
-## Information collected
+## MFA information collected
MFA Server, the NPS Extension, and the Windows Server 2016 Azure AD MFA AD FS Adapter collect and store the following information for 90 days.
Use the [Microsoft Privacy Portal](https://portal.azure.com/#blade/Microsoft_Azu
- MFA information is included in the export, which may take hours or days to complete.
-## Delete Data for Azure AD MFA
+## Delete data for Azure AD MFA
Use the [Microsoft Privacy Portal](https://portal.azure.com/#blade/Microsoft_Azure_Policy/UserPrivacyMenuBlade/Overview) to make a request for Account Close to delete all MFA cloud service information collected for this user.

- It may take up to 30 days for data to be fully removed.
+## Delete data for self-service password reset
+
+Users can add answers to security questions as part of SSPR. Security questions and answers are hashed to prevent unauthorized access. Only the hashed data is saved, so the security questions and answers can't be exported. Users can go to [My sign-ins](https://mysignins.microsoft.com/security-info) to edit or delete them. The only other information saved for SSPR is the user email address.
+
+Global Administrators can remove data collected for any user. On the **Users** page in Azure AD, click **Authentication methods** and select a user to remove their phone or email address.
+
## Next steps

[MFA Server reporting](howto-mfa-reporting.md)
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
On Windows 7, iOS, Android, and macOS Azure AD identifies the device using a cli
#### Chrome support
-For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji). This extension is required when a Conditional Access policy requires device-specific details.
+For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details.
To automatically deploy this extension to Chrome browsers, create the following registry key:
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-policies.md
Previously updated : 10/26/2021 Last updated : 01/11/2022
The behavior of the client apps condition was updated in August 2020. If you hav
This control is used to exclude devices that are hybrid Azure AD joined, or marked as compliant in Intune. This exclusion can be done to block unmanaged devices.
-#### Filters for devices (preview)
+#### Filter for devices
This control allows targeting specific devices based on their attributes in a policy.
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
With Conditional Access, organizations can restrict access to [approved (modern
For more information about the benefits of using app protection policies, see the article [App protection policies overview](/mem/intune/apps/app-protection-policy).
-## Template deployment
+## Create a Conditional Access policy
-Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+The policies below are put into [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policies apply as they intend, they can switch them to **On** or stage the deployment by adding specific groups and excluding others.
-## Create a Conditional Access policy
+### Require approved client apps or app protection policy with mobile devices
-The following steps will help create a Conditional Access policy requiring an approved client app **or** an app protection policy. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
+The following steps will help create a Conditional Access policy requiring an approved client app **or** an app protection policy when using an iOS/iPadOS or Android device. This policy will also prevent the use of Exchange ActiveSync clients using basic authentication on mobile devices. This policy works in tandem with an [app protection policy created in Microsoft Intune](/mem/intune/apps/app-protection-policies).
-This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others.
+Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates (Preview)](concept-conditional-access-policy-common.md#conditional-access-templates-preview).
1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
This policy is put in to [Report-only mode](howto-conditional-access-insights-re
   1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
   1. Select **Done**.
1. Under **Cloud apps or actions**, select **All cloud apps**.
- 1. Select **Done**.
1. Under **Conditions** > **Device platforms**, set **Configure** to **Yes**.
   1. Under **Include**, **Select device platforms**.
   1. Choose **Android** and **iOS**.
This policy is put in to [Report-only mode](howto-conditional-access-insights-re
After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+### Block Exchange ActiveSync on all devices
+
+This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users and groups**
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and exclude at least one account to prevent yourself from being locked out. If you don't exclude any accounts, you can't create the policy.
+ 1. Select **Done**.
+1. Under **Cloud apps or actions**, select **Select apps**.
+ 1. Select **Office 365 Exchange Online**.
+ 1. Select **Select**.
+1. Under **Conditions** > **Client apps**, set **Configure** to **Yes**.
+ 1. Uncheck all options except **Exchange ActiveSync clients**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**.
+ 1. Select **Require app protection policy**
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+
## Next steps

[App protection policies overview](/intune/apps/app-protection-policy)
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/redirect-uris-ios.md
Given the following application registration in the Azure portal:
App1 uses redirect `msauth.com.contoso.app1://auth`.\
App2 uses `msauth.com.contoso.app2://auth`.\
-App3 uses `msauth.com.contoso.app1://auth`.
+App3 uses `msauth.com.contoso.app3://auth`.
### Migrating from ADAL to MSAL
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
The following diagrams illustrate the underlying details in issuing, renewing, a
| F | Azure AD validates the Session key signature on the PRT cookie, validates the nonce, verifies that the device is valid in the tenant, and issues an ID token for the web page and an encrypted session cookie for the browser. |

> [!NOTE]
-> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts extension) or in private mode in Mozilla Firefox v91+
+> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts or Office Online extensions) or in private mode in Mozilla Firefox v91+
## Next steps
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-connectivity.md
Previously updated : 11/18/2021 Last updated : 01/11/2022
Of these URLs, the following table is the absolute bare minimum to be able to co
| mscrl.microsoft.com |HTTP/80 |Used to download CRL lists. |
| \*.verisign.com |HTTP/80 |Used to download CRL lists. |
| \*.entrust.net |HTTP/80 |Used to download CRL lists for MFA. |
-| \*.asazure.windows.net (Analysis Services)</br>\*.core.windows.net (Azure Storage)</br>\*.database.windows.net (SQL Server) </br>\*.graph.windows.net (Azure AD Graph)</br>\*.kusto.windows.net (Azure Data Explorer/Kusto)</br>\*.search.windows.net (search)</br>\*.servicebus.windows.net (Azure Service Bus)</br>|HTTPS/443|Used for the various Azure services|
+| \*.management.core.windows.net (Azure Storage)</br>\*.graph.windows.net (Azure AD Graph)|HTTPS/443|Used for the various Azure services|
| secure.aadcdn.microsoftonline-p.com |HTTPS/443 |Used for MFA. |
| \*.microsoftonline.com |HTTPS/443 |Used to configure your Azure AD directory and import/export data. |
| \*.crl3.digicert.com |HTTP/80 |Used to verify certificates. |
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
The following tutorials provide detailed guidance on implementing some of the mo
The advanced approach provides a more elaborate, yet flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would use this approach for scenarios not covered by the guided configuration templates.
-Refer to the following advanced configuration guides for your integration requirements:
+Refer to the following advanced configuration tutorials for your integration requirements:
- [F5 BIG-IP in Azure deployment walk-through](f5-bigip-deployment-guide.md)
The Guided Configuration wizard, available from BIG-IP version 13.1 aims to mini
The latest version of the Guided Configuration, 16.1, now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of having to do so on a per-app basis.
-Refer to the following guided configuration guides using Easy Button templates for your integration requirements:
+Refer to the following guided configuration tutorials using Easy Button templates for your integration requirements:
- [F5 BIG-IP Easy Button for SSO to Kerberos applications](f5-big-ip-kerberos-easy-button.md)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization
* Full SSO between Azure AD and BIG-IP published services
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
In this article, you'll learn to implement Secure Hybrid Access (SHA) with singl
Configuring BIG-IP published applications with Azure AD provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization
* Full SSO between Azure AD and BIG-IP published services
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
Previously updated : 12/21/2021 Last updated : 01/10/2022

# How to use Azure Monitor workbooks for Azure Active Directory reports
You can filter the Risky Users trends by:
If you have a high number of risky users where "no action" has been taken, consider enabling a Conditional Access policy to require secure password change when a user is high risk.
+## Best practices
+
+### Query partially succeeded
+
+After running a workbook, you might see the following error: "Query partially succeeded; results may be incomplete or incorrect"
+
+This error means that your query timed out in the database layer. The workbook still received results, but they also contained an error or warning message that some part of the query failed. In this case, review your query and start troubleshooting by reducing its scope.
+For example, you could add or rearrange a `where` condition to reduce the amount of data the query has to process.
## Next steps

* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
active-directory Asset Planner Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/asset-planner-tutorial.md
+
+Title: 'Tutorial: Azure AD SSO integration with Asset Planner'
+description: Learn how to configure single sign-on between Azure Active Directory and Asset Planner.
+Last updated : 12/20/2021
+# Tutorial: Azure AD SSO integration with Asset Planner
+
+In this tutorial, you'll learn how to integrate Asset Planner with Azure Active Directory (Azure AD). When you integrate Asset Planner with Azure AD, you can:
+
+* Control in Azure AD who has access to Asset Planner.
+* Enable your users to be automatically signed-in to Asset Planner with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Asset Planner single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Asset Planner supports **SP** initiated SSO.
+
+* Asset Planner supports **Just In Time** user provisioning.
+
+## Add Asset Planner from the gallery
+
+To configure the integration of Asset Planner into Azure AD, you need to add Asset Planner from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Asset Planner** in the search box.
+1. Select **Asset Planner** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Asset Planner
+
+Configure and test Azure AD SSO with Asset Planner using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Asset Planner.
+
+To configure and test Azure AD SSO with Asset Planner, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Asset Planner SSO](#configure-asset-planner-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Asset Planner test user](#create-asset-planner-test-user)** - to have a counterpart of B.Simon in Asset Planner that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Asset Planner** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type one of the following URLs:
+
+ | **Identifier** |
+ ||
+ | `https://assetplanner.com` |
+ | `https://us.assetplanner.com` |
+ | `https://staging.assetplanner.com` |
+ | `https://training.assetplanner.com` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://us.assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://staging.assetplanner.com/saml/ap_acs/<IDPName>` |
+ | `https://training.assetplanner.com/saml/ap_acs/<IDPName>` |
+
+ c. In the **Sign on URL** text box, type one of the following URLs:
+
+ | **Sign on URL** |
+ |--|
+ | `https://assetplanner.com` |
+ | `https://us.assetplanner.com` |
+ | `https://staging.assetplanner.com` |
+ | `https://training.assetplanner.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Asset Planner** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Asset Planner.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Asset Planner**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Asset Planner SSO
+
+To configure single sign-on on the **Asset Planner** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Asset Planner support team](mailto:support@assetplanner.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Asset Planner test user
+
+In this section, a user called Britta Simon is created in Asset Planner. Asset Planner supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Asset Planner, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the Asset Planner sign-on URL, where you can initiate the login flow.
+
+* Go to the Asset Planner sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Asset Planner tile in My Apps, you're redirected to the Asset Planner sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Asset Planner, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Blue Ocean Brain Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/blue-ocean-brain-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Blue Ocean Brain'
+description: Learn how to configure single sign-on between Azure Active Directory and Blue Ocean Brain.
+Last updated : 12/30/2021
+# Tutorial: Azure AD SSO integration with Blue Ocean Brain
+
+In this tutorial, you'll learn how to integrate Blue Ocean Brain with Azure Active Directory (Azure AD). When you integrate Blue Ocean Brain with Azure AD, you can:
+
+* Control in Azure AD who has access to Blue Ocean Brain.
+* Enable your users to be automatically signed-in to Blue Ocean Brain with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Blue Ocean Brain single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Blue Ocean Brain supports **SP and IDP** initiated SSO.
+* Blue Ocean Brain supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Blue Ocean Brain from the gallery
+
+To configure the integration of Blue Ocean Brain into Azure AD, you need to add Blue Ocean Brain from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Blue Ocean Brain** in the search box.
+1. Select **Blue Ocean Brain** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Blue Ocean Brain
+
+Configure and test Azure AD SSO with Blue Ocean Brain using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Blue Ocean Brain.
+
+To configure and test Azure AD SSO with Blue Ocean Brain, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Blue Ocean Brain SSO](#configure-blue-ocean-brain-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Blue Ocean Brain test user](#create-blue-ocean-brain-test-user)** - to have a counterpart of B.Simon in Blue Ocean Brain that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Blue Ocean Brain** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://www3.blueoceanbrain.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://www3.blueoceanbrain.com/c/<friendly id>/saml/acs`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www3.blueoceanbrain.com/c/<friendly id>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Blue Ocean Brain Client support team](mailto:support@blueoceanbrain.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Blue Ocean Brain application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Blue Ocean Brain application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | -| |
+ | FirstName | user.givenname |
+ | LastName | user.surname |
+ | Email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
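The FirstName, LastName, and Email claims in the table above are delivered inside the SAML response's attribute statement. As a rough illustration of the shape of that data (the XML below is a handmade toy sample, not a real Azure AD assertion):

```python
# Sketch: pull the FirstName/LastName/Email attributes the application
# expects out of a SAML AttributeStatement. The XML here is a toy sample.
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

SAMPLE = """
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="FirstName">
    <saml:AttributeValue>Britta</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="LastName">
    <saml:AttributeValue>Simon</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="Email">
    <saml:AttributeValue>B.Simon@contoso.com</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
"""

def read_attributes(statement_xml):
    # Map each Attribute's Name to its first AttributeValue text.
    root = ET.fromstring(statement_xml)
    return {
        attr.get("Name"): attr.find("saml:AttributeValue", SAML_NS).text
        for attr in root.findall("saml:Attribute", SAML_NS)
    }

print(read_attributes(SAMPLE))
```

In a real assertion these attributes are sourced from `user.givenname`, `user.surname`, and `user.mail`, per the mapping table above.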
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Blue Ocean Brain.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Blue Ocean Brain**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Blue Ocean Brain SSO
+
+To configure single sign-on on the **Blue Ocean Brain** side, you need to send the **App Federation Metadata Url** to the [Blue Ocean Brain support team](mailto:support@blueoceanbrain.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Blue Ocean Brain test user
+
+In this section, a user called Britta Simon is created in Blue Ocean Brain. Blue Ocean Brain supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Blue Ocean Brain, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the Blue Ocean Brain sign-on URL, where you can initiate the login flow.
+
+* Go to the Blue Ocean Brain sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Blue Ocean Brain instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Blue Ocean Brain tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if configured in SP mode, or automatically signed in to the Blue Ocean Brain instance for which you set up SSO if configured in IDP mode. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Blue Ocean Brain, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Drtrack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/drtrack-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with DRTrack'
+description: Learn how to configure single sign-on between Azure Active Directory and DRTrack.
+Last updated : 12/29/2021
+# Tutorial: Azure AD SSO integration with DRTrack
+
+In this tutorial, you'll learn how to integrate DRTrack with Azure Active Directory (Azure AD). When you integrate DRTrack with Azure AD, you can:
+
+* Control in Azure AD who has access to DRTrack.
+* Enable your users to be automatically signed-in to DRTrack with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DRTrack single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* DRTrack supports **SP and IDP** initiated SSO.
+
+## Add DRTrack from the gallery
+
+To configure the integration of DRTrack into Azure AD, you need to add DRTrack from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **DRTrack** in the search box.
+1. Select **DRTrack** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for DRTrack
+
+Configure and test Azure AD SSO with DRTrack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in DRTrack.
+
+To configure and test Azure AD SSO with DRTrack, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure DRTrack SSO](#configure-drtrack-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create DRTrack test user](#create-drtrack-test-user)** - to have a counterpart of B.Simon in DRTrack that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **DRTrack** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `https://<CustomerName>.appiangps.com` |
+ | `https://<CustomerName>.routetracking.com` |
+ | `https://<CustomerName>.appiantracking.com` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com` |
+ | `https://<CustomerName>.staging.appiantesting.com` |
+ | `https://<CustomerName>.qa.appiantesting` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |--|
+ | `https://<CustomerName>.appiangps.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.routetracking.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.appiantracking.com/AssertionConsumer.aspx` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com/AssertionConsumer.a` |
+
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ | `https://<CustomerName>.appiangps.com/Login.aspx` |
+ | `https://<CustomerName>.routetracking.com/Login.aspx` |
+ | `https://<CustomerName>.appiantracking.com/Login.aspx` |
+ | `https://<CustomerName>.drtrack.trimblemaps.com/Login.aspx` |
+ | `https://<CustomerName>.staging.appiantesting` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [DRTrack Client support team](mailto:support-appian@trimblemaps.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up DRTrack** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
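The Identifier, Reply URL, and Sign-on URL patterns above differ only by domain and the `<CustomerName>` placeholder, so all three values can be derived from one helper. A minimal sketch (`contoso` is a stand-in customer name; confirm the actual values with the DRTrack support team):

```python
# Sketch: build the DRTrack identifier, reply, and sign-on URLs for a
# customer across the supported domains. "contoso" stands in for the
# <CustomerName> placeholder and is not a real value.
DOMAINS = [
    "appiangps.com",
    "routetracking.com",
    "appiantracking.com",
    "drtrack.trimblemaps.com",
]

def drtrack_urls(customer, domain):
    base = f"https://{customer}.{domain}"
    return {
        "identifier": base,                        # Identifier (Entity ID)
        "reply": f"{base}/AssertionConsumer.aspx", # Reply URL (ACS endpoint)
        "sign_on": f"{base}/Login.aspx",           # Sign-on URL
    }

for domain in DOMAINS:
    print(drtrack_urls("contoso", domain)["sign_on"])
```

Only the domain your DRTrack subscription actually lives on needs to be entered in the portal.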
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to DRTrack.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **DRTrack**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure DRTrack SSO
+
+To configure single sign-on on the **DRTrack** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [DRTrack support team](mailto:support-appian@trimblemaps.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create DRTrack test user
+
+In this section, you create a user called Britta Simon in DRTrack. Work with [DRTrack support team](mailto:support-appian@trimblemaps.com) to add the users in the DRTrack platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the DRTrack sign-on URL, where you can initiate the login flow.
+
+* Go to the DRTrack sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the DRTrack instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the DRTrack tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if configured in SP mode, or automatically signed in to the DRTrack instance for which you set up SSO if configured in IDP mode. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure DRTrack, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Excelity Hcm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/excelity-hcm-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Excelity HCM'
+description: Learn how to configure single sign-on between Azure Active Directory and Excelity HCM.
+Last updated : 12/10/2021
+# Tutorial: Azure AD SSO integration with Excelity HCM
+
+In this tutorial, you'll learn how to integrate Excelity HCM with Azure Active Directory (Azure AD). When you integrate Excelity HCM with Azure AD, you can:
+
+* Control in Azure AD who has access to Excelity HCM.
+* Enable your users to be automatically signed-in to Excelity HCM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Excelity HCM single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Excelity HCM supports **IDP** initiated SSO.
+
+## Add Excelity HCM from the gallery
+
+To configure the integration of Excelity HCM into Azure AD, you need to add Excelity HCM from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Excelity HCM** in the search box.
+1. Select **Excelity HCM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Excelity HCM
+
+Configure and test Azure AD SSO with Excelity HCM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Excelity HCM.
+
+To configure and test Azure AD SSO with Excelity HCM, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Excelity HCM SSO](#configure-excelity-hcm-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Excelity HCM test user](#create-excelity-hcm-test-user)** - to have a counterpart of B.Simon in Excelity HCM that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Excelity HCM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. Save the configuration by clicking the **Save** button.
+
+1. The Excelity HCM application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Excelity HCM application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | --| |
+ | country | user.country |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
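The `country` row above adds a single custom claim on top of Azure AD's default claim set. A rough sketch of the resulting name-to-source mapping (the default claim names are simplified here for illustration; the portal displays the full claim URIs):

```python
# Sketch: Azure AD's default SAML claims plus the custom "country"
# attribute from the table above. Default names simplified for illustration.
DEFAULT_CLAIMS = {
    "givenname": "user.givenname",
    "surname": "user.surname",
    "emailaddress": "user.mail",
    "name": "user.userprincipalname",
}

def excelity_claims():
    claims = dict(DEFAULT_CLAIMS)       # start from the defaults
    claims["country"] = "user.country"  # add the custom attribute
    return claims

print(sorted(excelity_claims()))
```

The point is that the custom mapping extends the defaults rather than replacing them; review the full set in the portal against your requirements.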
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Excelity HCM.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Excelity HCM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Excelity HCM SSO
+
+To configure single sign-on on the **Excelity HCM** side, you need to send the **App Federation Metadata Url** to the [Excelity HCM support team](mailto:HCM.Support@ceridian.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Excelity HCM test user
+
+In this section, you create a user called Britta Simon in Excelity HCM. Work with [Excelity HCM support team](mailto:HCM.Support@ceridian.com) to add the users in the Excelity HCM platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Excelity HCM instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the Excelity HCM tile in My Apps, you should be automatically signed in to the Excelity HCM instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Excelity HCM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Flexera One Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/flexera-one-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Flexera One'
+description: Learn how to configure single sign-on between Azure Active Directory and Flexera One.
+Last updated : 12/29/2021
+# Tutorial: Azure AD SSO integration with Flexera One
+
+In this tutorial, you'll learn how to integrate Flexera One with Azure Active Directory (Azure AD). When you integrate Flexera One with Azure AD, you can:
+
+* Control in Azure AD who has access to Flexera One.
+* Enable your users to be automatically signed-in to Flexera One with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Flexera One single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Flexera One supports **SP and IDP** initiated SSO.
+* Flexera One supports **Just In Time** user provisioning.
+
+## Add Flexera One from the gallery
+
+To configure the integration of Flexera One into Azure AD, you need to add Flexera One from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Flexera One** in the search box.
+1. Select **Flexera One** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Flexera One
+
+Configure and test Azure AD SSO with Flexera One using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Flexera One.
+
+To configure and test Azure AD SSO with Flexera One, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Flexera One SSO](#configure-flexera-one-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Flexera One test user](#create-flexera-one-test-user)** - to have a counterpart of B.Simon in Flexera One that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Flexera One** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://secure.flexera.com/sso/saml2/<ID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Flexera One Client support team](mailto:support@flexera.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Flexera One application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Flexera One application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+	| ---- | ---------------- |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Flexera One** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
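If you prefer to script this step, the same test user can be created with the Azure CLI. This is a sketch; the display name, UPN domain, and password below are placeholders you must replace with your own values:

```azurecli
# Create the B.Simon test user (replace the domain with your tenant's domain
# and choose your own strong password).
az ad user create \
    --display-name "B.Simon" \
    --user-principal-name "B.Simon@contoso.com" \
    --password "<strong-password>"
```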
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Flexera One.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Flexera One**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Flexera One SSO
+
+To configure single sign-on on the **Flexera One** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Flexera One support team](mailto:support@flexera.com). They use these values to set up the SAML SSO connection properly on both sides. Learn [how](https://docs.flexera.com/flexera/EN/Administration/AzureADSSO.htm).
+
+### Create Flexera One test user
+
+In this section, a user called Britta Simon is created in Flexera One. Flexera One supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Flexera One, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Flexera One Sign-on URL, where you can initiate the login flow.
+
+* Go to the Flexera One Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Flexera One instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Flexera One tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Flexera One instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Flexera One, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Intacct Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/intacct-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Sage Intacct | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory integration with Sage Intacct'
description: Learn how to configure single sign-on between Azure Active Directory and Sage Intacct.
Previously updated : 01/15/2021 Last updated : 01/05/2022
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Sage Intacct supports **IDP** initiated SSO
+* Sage Intacct supports **IDP** initiated SSO.
## Adding Sage Intacct from the gallery
To configure and test Azure AD SSO with Sage Intacct, complete the following ste
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 2. **[Configure Sage Intacct SSO](#configure-sage-intacct-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create Sage Intacct test user](#create-sage-intacct-test-user)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of user.
+ 1. **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of the user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works. ### Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields: In the **Reply URL** text box, add the following URLs:
- `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.)
- `https://www.p-02.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-03.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-04.intacct.com/ia/acct/sso_response.phtml`
- `https://www.p-05.intacct.com/ia/acct/sso_response.phtml`
+
+ | Reply URL |
+ | - |
+ | `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.) |
+ | `https://www.p-02.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-03.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-04.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www.p-05.intacct.com/ia/acct/sso_response.phtml` |
1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the User Attributes dialog.
Follow these steps to enable Azure AD SSO in the Azure portal.
| Attribute Name | Source Attribute| | | | | Company Name | **Sage Intacct Company ID** |
- | name | Value should be same as the Sage Intacct **User ID**, which you enter in the **Create Sage Intacct test user section**, which is explained later in the tutorial |
+ | name | `<User ID>`|
+
+ > [!NOTE]
+ > The `<User ID>` value should be the same as the Sage Intacct **User ID**, which you enter in the **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)** section, explained later in this tutorial.
a. Click **Add new claim** to open the **Manage user claims** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Sage Intacct company site as an administrator.
-1. Click the **Company** tab, and then click **Company Info**.
+1. Go to **Company**, click the **Setup** tab, and click **Company** under the Configuration section.
- ![Company](./media/intacct-tutorial/ic790037.png "Company")
+ ![Company](./media/intacct-tutorial/setup.png)
1. Click the **Security** tab, and then click **Edit**.
- ![Security](./media/intacct-tutorial/ic790038.png "Security")
+ ![screenshot for Security](./media/intacct-tutorial/security.png "Security")
1. In the **Single sign on (SSO)** section, perform the following steps:
- ![Single sign on](./media/intacct-tutorial/ic790039.png "single sign on")
+ ![Single sign on](./media/intacct-tutorial/intacct-configuration.png)
a. Select **Enable single sign on**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
d. In **Login URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal. e. Open your **base-64** encoded certificate in notepad, copy the content of it into your clipboard, and then paste it to the **Certificate** box.
+
+ f. Set **Requested authentication content type** to **Exact**.
- f. Click **Save**.
+ g. Click **Save**.
-### Create Sage Intacct test user
+### Set up individual users in Intacct
-To set up Azure AD users so they can sign in to Sage Intacct, they must be provisioned into Sage Intacct. For Sage Intacct, provisioning is a manual task.
+When SSO is enabled for your company, you can individually require users to use SSO when logging in to your company. After you set up a user for SSO, the user can no longer use a password to log in to your company directly. Instead, that user needs to use single sign-on and is authenticated by your SSO identity provider as an authorized user. Any users who aren't set up for SSO can continue to log in to your company using the basic sign-in page.
-**To provision user accounts, perform the following steps:**
+**To enable SSO for a user, perform the following steps:**
1. Sign in to your **Sage Intacct** tenant.
-1. Click the **Company** tab, and then click **Users**.
-
- ![Users](./media/intacct-tutorial/ic790041.png "Users")
-
-1. Click the **Add** tab.
+1. Go to **Company**, click the **Admin** tab, then click **Users**.
- ![Add](./media/intacct-tutorial/ic790042.png "Add")
+ ![Screenshot for Users](./media/intacct-tutorial/users.png "Users")
-1. In the **User Information** section, perform the following steps:
+1. Locate the desired user and click **Edit** next to it.
- ![Screenshot shows the User Information section where you can enter the information in this step.](./media/intacct-tutorial/ic790043.png "User Information")
-
- a. Enter the **User ID**, the **Last name**, **First name**, the **Email address**, the **Title**, and the **Phone** of an Azure AD account that you want to provision into the **User Information** section.
-
- > [!NOTE]
- > Make sure that the **User ID** in above screenshot and the **Source Attribute** value which is mapped with the **name** attribute in the **User Attributes** section in the Azure portal should be same.
-
- b. Select the **Admin privileges** of an Azure AD account that you want to provision.
-
- c. Click **Save**.
-
- d. The Azure AD account holder receives an email and follows a link to confirm their account before it becomes active.
+ ![Edit the user](./media/intacct-tutorial/user-edit.png "edit")
1. Click the **Single sign-on** tab and make sure that the **Federated SSO user ID** shown in the screenshot below matches the **Source Attribute** value that is mapped to the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` claim in the **User Attributes** section in the Azure portal.
- ![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/ic790044.png "User Information")
+ ![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/user-information.png "User Information")
> [!NOTE] > To provision Azure AD user accounts, you can use other Sage Intacct user account creation tools or APIs that are provided by Sage Intacct.
active-directory Jooto Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/jooto-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Jooto'
+description: Learn how to configure single sign-on between Azure Active Directory and Jooto.
++++++++ Last updated : 12/29/2021++++
+# Tutorial: Azure AD SSO integration with Jooto
+
+In this tutorial, you'll learn how to integrate Jooto with Azure Active Directory (Azure AD). When you integrate Jooto with Azure AD, you can:
+
+* Control in Azure AD who has access to Jooto.
+* Enable your users to be automatically signed-in to Jooto with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Jooto single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Jooto supports **SP and IDP** initiated SSO.
+* Jooto supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Jooto from the gallery
+
+To configure the integration of Jooto into Azure AD, you need to add Jooto from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Jooto** in the search box.
+1. Select **Jooto** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Jooto
+
+Configure and test Azure AD SSO with Jooto using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Jooto.
+
+To configure and test Azure AD SSO with Jooto, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Jooto SSO](#configure-jooto-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Jooto test user](#create-jooto-test-user)** - to have a counterpart of B.Simon in Jooto that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Jooto** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps, as the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode then perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://app.jooto.com/`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://app.jooto.com/auth/sso/callback`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://app.jooto.com/auth/sso/callback`
+
+ d. In the **Relay State** text box, type a value using the following pattern:
+ `<ID>`
+
+1. The Jooto application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Jooto application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+	| -- | -- |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | email | user.mail |
+ | username | user.userprincipalname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Jooto** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Jooto.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Jooto**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Jooto SSO
+
+To configure single sign-on on the **Jooto** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Jooto support team](mailto:jooto-success@prtimes.co.jp). They use these values to set up the SAML SSO connection properly on both sides.
+
+### Create Jooto test user
+
+In this section, a user called Britta Simon is created in Jooto. Jooto supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Jooto, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Jooto Sign-on URL, where you can initiate the login flow.
+
+* Go to the Jooto Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Jooto instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Jooto tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Jooto instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Jooto, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-configuration.md
az aks update -n aks -g myResourceGroup --enable-oidc-issuer
[az-feature-register]: /cli/azure/feature#az_feature_register [az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-security.md
For more information on core Kubernetes and AKS concepts, see:
[aks-daemonsets]: concepts-clusters-workloads.md#daemonsets [aks-upgrade-cluster]: upgrade-cluster.md [aks-aad]: ./managed-aad.md
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd-preview
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-cli.md
az aks nodepool add \
The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-## Optional: Using `containerd` with Windows Server node pools (preview)
+## Optional: Using `containerd` with Windows Server node pools
-Beginning in Kubernetes version 1.20 and greater, you can specify `containerd` as the container runtime for Windows Server 2019 node pools.
+In Kubernetes version 1.20 and greater, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. Beginning in Kubernetes 1.23, `containerd` will be the default container runtime for Windows.
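To check which runtime the nodes in a pool are currently using, one option is kubectl's wide node listing, which reports the runtime per node:

```shell
# The CONTAINER-RUNTIME column shows docker://... or containerd://... for each node.
kubectl get nodes -o wide
```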
-
-You will need the *aks-preview* Azure CLI extension version 0.5.24 or greater. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
> [!IMPORTANT] > When using `containerd` with Windows Server 2019 node pools:
az extension update --name aks-preview
> - When creating or updating a node pool to run Windows Server containers, the default value for *node-vm-size* is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the *node-vm-size* parameter, check the list of [restricted VM sizes][restricted-vm-sizes]. > - It is highly recommended that you use [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd`, and tolerations or node selectors with your deployments, to guarantee your workloads are scheduled correctly.
-Register the `UseCustomizedWindowsContainerRuntime` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
-
-```azurecli
-az feature register --namespace "Microsoft.ContainerService" --name "UseCustomizedWindowsContainerRuntime"
-```
-
-You can check on the registration status using the [az feature list][az-feature-list] command:
-
-```azurecli
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/UseCustomizedWindowsContainerRuntime')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the Microsoft.ContainerService resource provider using the [az provider register][az-provider-register] command:
-
-```azurecli
-az provider register --namespace Microsoft.ContainerService
-```
-### Add a Windows Server node pool with `containerd` (preview)
+### Add a Windows Server node pool with `containerd`
Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers with the `containerd` runtime.
az aks nodepool add \
The above command creates a new Windows Server node pool using `containerd` as the runtime named *npwcd* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
-### Upgrade an existing Windows Server node pool to `containerd` (preview)
+### Upgrade an existing Windows Server node pool to `containerd`
Use the `az aks nodepool upgrade` command to upgrade a specific node pool from Docker to `containerd`.
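As a sketch of that upgrade (the resource group, cluster and pool names, and Kubernetes version are placeholders; the custom header shown here was used while this capability was rolling out and may no longer be necessary):

```azurecli
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --kubernetes-version 1.21.1 \
    --aks-custom-headers WindowsContainerRuntime=containerd
```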
api-management Add Api Manually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/add-api-manually.md
Previously updated : 04/26/2021 Last updated : 12/16/2021 -+ # Add an API manually
-The steps in this article show how to use the Azure portal to add an API manually to the API Management (APIM) instance. A common scenario when you would want to create a blank API and define it manually is when you want to mock the API. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
+This article shows the steps to add an API manually to an API Management (APIM) instance. When you want to mock an API, you can create a blank API and define it manually. For details about mocking an API, see [Mock API responses](mock-api-responses.md).
If you want to import an existing API, see the [related topics](#related-topics) section.
Complete the following quickstart: [Create an Azure API Management instance](get
1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu. 2. From the left menu, select **+ Add API**.
-3. Select **Blank API** from the list.
- ![Blank API](media/add-api-manually/blank-api.png)
+3. Select **HTTP** from the list.
+ ![Blank API](media/add-api-manually/blank-api-1.png)
4. Enter settings for the API. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial. 5. Select **Create**. At this point, you have no operations in API Management that map to the operations in your back-end API. If you call an operation that is exposed through the back end but not through the API Management, you get a **404**.
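The steps above can also be scripted. As a sketch using the Azure CLI (the resource group, service name, API ID, path, and display name are placeholders):

```azurecli
# Create a blank API in an existing API Management instance.
az apim api create \
    --resource-group myResourceGroup \
    --service-name myApimService \
    --api-id my-blank-api \
    --path myapi \
    --display-name "My blank API"
```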
->[!NOTE]
-> By default, when you add an API, even if it is connected to some back-end service, APIM will not expose any operations until you allow them. To allow an operation of your back-end service, create an APIM operation that maps to the back-end operation.
+>[!NOTE]
+> By default, when you add an API, even if it's connected to some back-end service, APIM won't expose any operations until you allow them. To allow an operation of your back-end service, create an APIM operation that maps to the back-end operation.
## Add and test an operation
-This section shows how to add a "/get" operation in order to map it to the back end "http://httpbin.org/get" operation.
+This section shows how to add a "/get" operation that maps to the back-end "http://httpbin.org/get" operation.
### Add an operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/get` in the resource. 4. Enter "*FetchData*" for **Display name**. 5. Select **Save**. ### Test an operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab. 2. Select **FetchData**.
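Outside the portal, you can exercise the same operation from any HTTP client; for example, with curl (the gateway hostname, API path, and subscription key below are placeholders for your own values):

```shell
# Call the FetchData (/get) operation through the API Management gateway.
curl -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
     "https://<apim-name>.azure-api.net/<api-path>/get"
```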
This section shows how to add an operation that takes a parameter. In this case,
### Add the operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/status/{code}` in the resource. Optionally, you can provide some information associated with this parameter. For example, enter "*Number*" for **TYPE**, "*200*" (default) for **VALUES**. 4. Enter "GetStatus" for **Display name**. 5. Select **Save**.
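The `{code}` segment is a template parameter: API Management substitutes the caller-supplied value into the operation URL before forwarding the request to the back end. A plain shell illustration of that substitution (the gateway URL is a placeholder):

```shell
base="https://contoso.azure-api.net/myapi"  # placeholder: gateway URL + API path
code=418                                    # value supplied for the {code} parameter
echo "${base}/status/${code}"
```

Calling the resulting URL through the gateway forwards to httpbin's `/status/418` operation.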
-### Test the operation
+### Test the operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab. 2. Select **GetStatus**. By default the code value is set to "*200*". You can change it to test other values. For example, type "*418*".
This section shows how to add a wildcard operation. A wildcard operation lets yo
### Add the operation 1. Select the API you created in the previous step.
-2. Click **+ Add Operation**.
+2. Select **+ Add Operation**.
3. In the **URL**, select **GET** and enter `/*` in the resource. 4. Enter "*WildcardGet*" for **Display name**. 5. Select **Save**. ### Test the operation
-Test the operation in the Azure portal. Alternatively, you can test it in the **Developer portal**.
+Test the operation in the Azure portal. You can also test it in the **Developer portal**.
1. Select the **Test** tab.
-2. Select **WildcardGet**. Try one or more of the GET operations that you tested in previous sections, or try a different supported GET operation.
+2. Select **WildcardGet**. Try the GET operations that you tested in previous sections, or try a different supported GET operation.
For example, in **Template parameters**, update the value next to the wildcard (*) name to `headers`. The operation returns the incoming request's HTTP headers. 1. Press **Send**. The response that the "http://httpbin.org/headers" operation generates appears. If you want to transform your operations, see [Transform and protect your API](transform-api.md).
->[!NOTE]
-> It can be important to understand how the host for the backend API you are integrating with handles trailing slashes on an operation URL. For more information, please see this [API Management FAQ](./api-management-faq.yml#how-does-api-management-handle-trailing-slashes-when-calling-backend-services-).
+>[!NOTE]
+> It can be important to understand how the host for the backend API you're integrating with handles trailing slashes on an operation URL. For more information, see this [API Management FAQ](./api-management-faq.yml#how-does-api-management-handle-trailing-slashes-when-calling-backend-services-).
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-api-inspector.md
documentationcenter: '' editor: ''- Previously updated : 10/30/2020 Last updated : 12/10/2021 -+ # Tutorial: Debug your APIs using request tracing
-This tutorial describes how to inspect (trace) request processing in Azure API Management to help you debug and troubleshoot your API.
+This tutorial describes how to inspect (trace) request processing in Azure API Management. Tracing helps you debug and troubleshoot your API.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Trace an example call
> * Review request processing steps

## Prerequisites
In this tutorial, you learn how to:
+ Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+ Complete the following tutorial: [Import and publish your first API](import-and-publish.md).
-## Verify allow tracing setting
+## Verify allow tracing setting
The **Allow tracing** setting for the subscription used for your API must be enabled. If you're using the built-in all-access subscription, it's enabled by default. To verify in the portal, navigate to your API Management instance and select **Subscriptions**.
- :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing.png" alt-text="Allow tracing for subscription":::
+ :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing-1.png" alt-text="Allow tracing for subscription":::
## Trace a call
The **Allow tracing** setting for the subscription used for your API must be ena
1. Select **Demo Conference API** from your API list.
1. Select the **Test** tab.
1. Select the **GetSpeakers** operation.
-1. Confirm that the HTTP request header includes **Ocp-Apim-Trace: True** and a valid value for **Ocp-Apim-Subscription-Key**. If it isn't, select **+ Add Header** to add the header.
+1. Confirm that the HTTP request header includes **Ocp-Apim-Trace: True** and a valid value for **Ocp-Apim-Subscription-Key**. If it doesn't, select **+ Add header** to add the header.
1. Select **Send** to make an API call.
- :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call.png" alt-text="Configure API tracing":::
+ :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call-1.png" alt-text="Configure API tracing":::
> [!TIP]
> If **Ocp-Apim-Subscription-Key** isn't automatically populated in the HTTP request, you can retrieve it in the portal. Select **Subscriptions**, and open the context menu (**...**) for your subscription. Select **Show/hide keys**. You can also regenerate keys if needed. Then, add a key to the header.

## Review trace information
-1. After the call completes, go to the **Trace** tab in the **HTTP Response**.
+1. After the call completes, go to the **Trace** tab in the **HTTP response**.
1. Select any of the following links to jump to detailed trace info: **Inbound**, **Backend**, **Outbound**.
- :::image type="content" source="media/api-management-howto-api-inspector/response-trace.png" alt-text="Review response trace":::
+ :::image type="content" source="media/api-management-howto-api-inspector/response-trace-1.png" alt-text="Review response trace":::
- * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they will appear here.
+ * **Inbound** - Shows the original request API Management received from the caller and the policies applied to the request. For example, if you added policies in [Tutorial: Transform and protect your API](transform-api.md), they'll appear here.
* **Backend** - Shows the requests API Management sent to the API backend and the response it received.
The **Allow tracing** setting for the subscription used for your API must be ena
1. On the **Message** tab, the **ocp-apim-trace-location** header shows the location of the trace data stored in Azure blob storage. If needed, go to this location to retrieve the trace.
- :::image type="content" source="media/api-management-howto-api-inspector/response-message.png" alt-text="Trace location in Azure Storage":::
+ :::image type="content" source="media/api-management-howto-api-inspector/response-message-1.png" alt-text="Trace location in Azure Storage":::
## Next steps

In this tutorial, you learned how to:
api-management Api Management Howto Use Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-use-azure-monitor.md
Title: Tutorial - Monitor published APIs in Azure API Management | Microsoft Docs
-description: Follow the steps of this tutorial to learn how to use metrics, alerts, activity logs, and resource logs to monitor your APIs in Azure API Management.
+description: Learn how to use metrics, alerts, activity logs, and resource logs to monitor your APIs in Azure API Management.
-+ Previously updated : 10/14/2020 Last updated : 12/09/2021 # Tutorial: Monitor published APIs
With Azure Monitor, you can visualize, query, route, archive, and take actions o
In this tutorial, you learn how to: > [!div class="checklist"]
-> * View metrics of your API
-> * Set up an alert rule
+> * View metrics of your API
+> * Set up an alert rule
> * View activity logs > * Enable and view resource logs
You can also use API Management's built-in [analytics](howto-use-analytics.md) t
API Management emits [metrics](../azure-monitor/essentials/data-platform-metrics.md) every minute, giving you near real-time visibility into the state and health of your APIs. The following are the two most frequently used metrics. For a list of all available metrics, see [supported metrics](../azure-monitor/essentials/metrics-supported.md#microsoftapimanagementservice). * **Capacity** - helps you make decisions about upgrading/downgrading your APIM services. The metric is emitted per minute and reflects the gateway capacity at the time of reporting. The metric ranges from 0-100 calculated based on gateway resources such as CPU and memory utilization.
-* **Requests** - helps you analyze API traffic going through your API Management services. The metric is emitted per minute and reports the number of gateway requests with dimensions including response codes, location, hostname, and errors.
+* **Requests** - helps you analyze API traffic going through your API Management services. The metric is emitted per minute and reports the number of gateway requests with dimensions. Filter requests by response codes, location, hostname, and errors.
> [!IMPORTANT] > The following metrics have been deprecated as of May 2019 and will be retired in August 2023: Total Gateway Requests, Successful Gateway Requests, Unauthorized Gateway Requests, Failed Gateway Requests, Other Gateway Requests. Please migrate to the Requests metric which provides equivalent functionality. To access metrics:
To access metrics:
:::image type="content" source="media/api-management-howto-use-azure-monitor/api-management-metrics-blade.png" alt-text="Screenshot of Metrics item in Monitoring menu":::
-1. From the drop-down, select metrics you are interested in. For example, **Requests**.
+1. From the drop-down, select metrics you're interested in. For example, **Requests**.
1. The chart shows the total number of API calls.
-1. The chart can be filtered using the dimensions of the **Requests** metric. For example, select **Add filter**, select **Backend Response Code Category**, enter 500 as the value. Now the chart shows the number of requests that were failed in the API backend.
+1. You can filter the chart using the dimensions of the **Requests** metric. For example, select **Add filter**, select **Backend Response Code Category**, enter 500 as the value. The chart shows the number of requests failed in the API backend.
-## Set up an alert rule
+## Set up an alert rule
You can receive [alerts](../azure-monitor/alerts/alerts-metric-overview.md) based on metrics and activity logs. Azure Monitor allows you to [configure an alert](../azure-monitor/alerts/alerts-metric.md) to do the following when it triggers:
To configure an example alert rule based on a request metric:
1. In **Signal name**, select **Requests**.
1. In **Split by dimensions**, in **Dimension name**, select **Gateway Response Code Category**.
1. In **Dimension values**, select **4xx**, for client errors such as unauthorized or invalid requests.
- 1. In **Alert logic**, specify a threshold after which the alert should be triggered and select **Done**.
+ 1. In **Alert logic**, specify a **Threshold value** after which the alert should be triggered.
+ 1. In **Evaluated based on**, specify **Aggregation granularity** and **Frequency of evaluation**, then select **Done**.
- :::image type="content" source="media/api-management-howto-use-azure-monitor/threshold.png" alt-text="Screenshot of Configure Signal Logic windows":::
+ :::image type="content" source="media/api-management-howto-use-azure-monitor/threshold-1.png" alt-text="Screenshot of Configure Signal Logic windows":::
1. Select an existing action group or create a new one. In the following example, a new action group is created. A notification email will be sent to admin@contoso.com. :::image type="content" source="media/api-management-howto-use-azure-monitor/action-details.png" alt-text="Screenshot of notifications for new action group":::
-1. Enter a name and description of the alert rule and select the severity level.
+1. Enter a name and description of the alert rule and select the severity level.
1. Select **Create alert rule**.
1. Now, test the alert rule by calling the Conference API without an API key. For example:
To configure an example alert rule based on a request metric:
curl https://apim-hello-world.azure-api.net/conference/speakers
```
- An alert will be triggered based on the evaluation period, and email will be sent to admin@contoso.com.
+ An alert triggers based on the evaluation period, and an email is sent to admin@contoso.com.
Alerts also appear on the **Alerts** page for the API Management instance.
To configure an example alert rule based on a request metric:
## Activity logs
-Activity logs provide insight into the operations that were performed on your API Management services. Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your API Management services.
+Activity logs provide insight into the operations on your API Management services. Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your API Management services.
> [!NOTE] > Activity logs do not include read (GET) operations or operations performed in the Azure portal or using the original Management APIs.
To view the activity log:
## Resource logs
-Resource logs provide rich information about operations and errors that are important for auditing as well as troubleshooting purposes. Resource logs differ from activity logs. The activity log provides insights into the operations that were performed on your Azure resources. Resource logs provide insight into operations that your resource performed.
+Resource logs provide rich information about operations and errors that are important for auditing and troubleshooting purposes. Resource logs differ from activity logs. The activity log provides insights into the operations run on your Azure resources. Resource logs provide insight into operations that your resource ran.
To configure resource logs:
To configure resource logs:
1. Select **+ Add diagnostic setting**.
1. Select the logs or metrics that you want to collect.
- You can archive resource logs along with metrics to a storage account, stream them to an Event Hub, or send them to a Log Analytics workspace.
+ You can archive resource logs along with metrics to a storage account, stream them to an Event Hub, or send them to a Log Analytics workspace.
For more information, see [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
In this tutorial, you learned how to:
> [!div class="checklist"]
> * View metrics of your API
-> * Set up an alert rule
+> * Set up an alert rule
> * View activity logs > * Enable and view resource logs
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
To enable monitoring of the self-hosted gateway, configure the following Log Ana
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Discover all [Azure Arc-enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md). * Learn more about [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+* Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
This article provides the steps for deploying self-hosted gateway component of A
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
* Learn more about [Azure Kubernetes Service](../aks/intro-kubernetes.md). * Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md). * Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
```console helm install azure-api-management-gateway \
- --set gateway.endpoint='<your token>' \
- --set gateway.authKey='<your configuration url>' \
+ --set gateway.endpoint='<your configuration url>' \
+ --set gateway.authKey='<your token>' \
azure-apim-gateway/azure-api-management-gateway ```
This article provides the steps for deploying self-hosted gateway component of A
* Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md). [helm]: https://helm.sh/
-[helm-install]: https://helm.sh/docs/intro/install/
+[helm-install]: https://helm.sh/docs/intro/install/
api-management How To Deploy Self Hosted Gateway Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes.md
This article describes the steps for deploying the self-hosted gateway component
> Run the `kubectl logs deployment/<gateway-name>` command to view logs from a randomly selected pod if there's more than one. > Run `kubectl logs -h` for a complete set of command options, such as how to view logs for a specific pod or container.
-## Production deployment considerations
-
-### Access token
-Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
-
-When you're automating token refresh, use [this management API operation](/rest/api/apimanagement/current-ga/gateway/generate-token) to generate a new token. For information on managing Kubernetes secrets, see the [Kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret).
-
-### Namespace
-Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
-
-The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
-Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
-
-### Number of replicas
-The minimum number of replicas suitable for production is two.
-
-By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
-
-### Container resources
-By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
-
-It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
-
-- Specific hardware that the cluster is running on.
-- Presence and type of virtualization.
-- Number and rate of concurrent client connections.
-- Request rate.
-- Kind and number of configured policies.
-- Payload size and whether payloads are buffered or streamed.
-- Backend service latency.
-
-We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
-
-### Container image tag
-The YAML file provided in the Azure portal uses the **latest** tag. This tag always references the most recent version of the self-hosted gateway container image.
-
-Consider using a specific version tag in production to avoid unintentional upgrade to a newer version.
-
-You can [download a full list of available tags](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
-
-### DNS policy
-DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
-
-The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
-
-To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
-
-### External traffic policy
-The YAML file provided in the Azure portal sets `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This preserves caller IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross node load balancing, eliminating network hops caused by it. Be aware, that this setting might cause asymmetric distribution of traffic in deployments with unequal number of gateway pods per node.
-
-### Custom domain names and SSL certificates
-
-If you use custom domain names for the API Management endpoints, especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
-
-In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
-
-### Configuration backup
-To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
-
-Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be <code>/apim/config</code>. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
-To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
-
-### Local logs and metrics
-The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
-When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
-Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
- ## Next steps * To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md). * Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
+* Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
+
+ Title: Self-hosted gateway on Kubernetes in production | Azure API Management
+description: Learn about guidance to run an API Management self-hosted gateway on Kubernetes for production workloads
++++++ Last updated : 12/17/2021++
+# Guidance for running self-hosted gateway on Kubernetes in production
+
+To run the self-hosted gateway in production, there are various aspects to keep in mind. For example, it should be deployed in a highly available manner, use configuration backups to handle temporary disconnects, and more.
+
+This article provides guidance on how to run the [self-hosted gateway](./self-hosted-gateway-overview.md) on Kubernetes for production workloads so that it runs smoothly and reliably.
+
+## Access token
+Without a valid access token, a self-hosted gateway can't access and download configuration data from the endpoint of the associated API Management service. The access token can be valid for a maximum of 30 days. It must be regenerated, and the cluster configured with a fresh token, either manually or via automation before it expires.
+
+When you're automating token refresh, use [this management API operation](/rest/api/apimanagement/current-ga/gateway/generate-token) to generate a new token. For information on managing Kubernetes secrets, see the [Kubernetes website](https://kubernetes.io/docs/concepts/configuration/secret).
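As an illustration, a refreshed token can be rotated into the cluster as a Kubernetes secret. This is only a sketch: the secret name and key below are hypothetical, and the portal-generated deployment file defines its own secret that the gateway deployment references.

```yaml
# Hypothetical sketch: store a freshly generated gateway access token
# as an opaque secret. The secret name and key are illustrative only;
# match them to whatever your gateway deployment actually references.
apiVersion: v1
kind: Secret
metadata:
  name: contoso-gateway-token
type: Opaque
stringData:
  value: "<fresh access token from the generate-token API operation>"
```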
+
+## Namespace
+Kubernetes [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) help with dividing a single cluster among multiple teams, projects, or applications. Namespaces provide a scope for resources and names. They can be associated with a resource quota and access control policies.
+
+The Azure portal provides commands to create self-hosted gateway resources in the **default** namespace. This namespace is automatically created, exists in every cluster, and can't be deleted.
+Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.
+
+## Number of replicas
+The minimum number of replicas suitable for production is two.
+
+By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
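For example, the relevant Deployment fields might be set as follows. This is a sketch, not the portal-generated file; the gateway name and the chosen `maxUnavailable`/`maxSurge` values are assumptions to tune for your cluster.

```yaml
# Illustrative Deployment fragment: two replicas with explicit
# rolling-update bounds (the values shown are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contoso-gateway          # hypothetical gateway name
spec:
  replicas: 2                    # minimum suitable for production
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # keep all desired replicas serving
      maxSurge: 1                # roll out one extra pod at a time
```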
+
+## Container resources
+By default, the YAML file provided in the Azure portal doesn't specify container resource requests.
+
+It's impossible to reliably predict and recommend the amount of per-container CPU and memory resources and the number of replicas required for supporting a specific workload. Many factors are at play, such as:
+
+- Specific hardware that the cluster is running on.
+- Presence and type of virtualization.
+- Number and rate of concurrent client connections.
+- Request rate.
+- Kind and number of configured policies.
+- Payload size and whether payloads are buffered or streamed.
+- Backend service latency.
+
+We recommend setting resource requests to two cores and 2 GiB as a starting point. Perform a load test and scale up/out or down/in based on the results.
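The suggested starting point translates into a container spec fragment along these lines (the container name is illustrative):

```yaml
# Sketch: request two cores and 2 GiB per gateway container as a
# starting point; load-test and adjust before settling on values.
spec:
  containers:
  - name: contoso-gateway        # hypothetical container name
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
```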
+
+## Container image tag
+The YAML file provided in the Azure portal uses the **latest** tag. This tag always references the most recent version of the self-hosted gateway container image.
+
+Consider using a specific version tag in production to avoid unintentional upgrade to a newer version.
+
+You can [download a full list of available tags](https://mcr.microsoft.com/v2/azure-api-management/gateway/tags/list).
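For instance, pinning the image might look like the following. The `2.0.0` tag is a placeholder; pick a real tag from the list above.

```yaml
# Pin the gateway image to a specific tag instead of "latest".
# The tag shown is a placeholder - substitute one from the tag list.
containers:
- name: contoso-gateway          # hypothetical container name
  image: mcr.microsoft.com/azure-api-management/gateway:2.0.0
```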
+
+> [!TIP]
+> When installing with Helm, image tagging is optimized for you. The Helm chart's application version pins the gateway to a given version and does not rely on `latest`.
+>
+> Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+## DNS policy
+DNS name resolution plays a critical role in a self-hosted gateway's ability to connect to dependencies in Azure and dispatch API calls to backend services.
+
+The YAML file provided in the Azure portal applies the default [ClusterFirst](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) policy. This policy causes name resolution requests not resolved by the cluster DNS to be forwarded to the upstream DNS server that's inherited from the node.
+
+To learn about name resolution in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service). Consider customizing [DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) or [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) as appropriate for your setup.
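One possible customization, shown only as a sketch, keeps `ClusterFirst` but adds a pod-level `dnsConfig` tweak; whether any such change is needed depends entirely on your cluster's resolver setup.

```yaml
# Illustrative pod spec fragment: default ClusterFirst policy plus a
# custom resolver option (the option value is an assumption).
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    options:
    - name: ndots
      value: "2"
```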
+
+## External traffic policy
+The YAML file provided in the Azure portal sets the `externalTrafficPolicy` field on the [Service](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#service-v1-core) object to `Local`. This setting preserves the caller's IP address (accessible in the [request context](api-management-policy-expressions.md#ContextVariables)) and disables cross-node load balancing, eliminating the network hops it causes. Be aware that this setting might cause asymmetric distribution of traffic in deployments with an unequal number of gateway pods per node.
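The Service fragment in question looks roughly like this; the name, ports, and selector are illustrative, not the portal template's exact values.

```yaml
# Sketch of a gateway Service with externalTrafficPolicy: Local,
# preserving the caller IP at the cost of cross-node balancing.
apiVersion: v1
kind: Service
metadata:
  name: contoso-gateway
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    targetPort: 8080             # assumed gateway container port
  selector:
    app: contoso-gateway
```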
+
+## Custom domain names and SSL certificates
+
+If you use custom domain names for the API Management endpoints, especially if you use a custom domain name for the Management endpoint, you might need to update the value of `config.service.endpoint` in the **\<gateway-name\>.yaml** file to replace the default domain name with the custom domain name. Make sure that the Management endpoint can be accessed from the pod of the self-hosted gateway in the Kubernetes cluster.
+
+In this scenario, if the SSL certificate that's used by the Management endpoint isn't signed by a well-known CA certificate, you must make sure that the CA certificate is trusted by the pod of the self-hosted gateway.
+
+## Configuration backup
+
+Configure a local storage volume for the self-hosted gateway container, so it can persist a backup copy of the latest downloaded configuration. If connectivity is down, the storage volume can use the backup copy upon restart. The volume mount path must be <code>/apim/config</code>. See an example on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway/blob/master/examples/self-hosted-gateway-with-configuration-backup.yaml).
+To learn about storage in Kubernetes, see the [Kubernetes website](https://kubernetes.io/docs/concepts/storage/volumes/).
+
+> [!NOTE]
+> To learn about self-hosted gateway behavior in the presence of a temporary Azure connectivity outage, see [Self-hosted gateway overview](self-hosted-gateway-overview.md#connectivity-to-azure).
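A minimal sketch of the mount, assuming a pre-created persistent volume claim (the volume and claim names are hypothetical; see the linked GitHub example for a complete manifest):

```yaml
# Illustrative fragment: persist the configuration backup at the
# required /apim/config mount path via a hypothetical PVC.
spec:
  containers:
  - name: contoso-gateway
    volumeMounts:
    - name: config-backup
      mountPath: /apim/config
  volumes:
  - name: config-backup
    persistentVolumeClaim:
      claimName: gateway-config-backup   # hypothetical claim name
```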
+
+## Local logs and metrics
+The self-hosted gateway sends telemetry to [Azure Monitor](api-management-howto-use-azure-monitor.md) and [Azure Application Insights](api-management-howto-app-insights.md) according to configuration settings in the associated API Management service.
+When [connectivity to Azure](self-hosted-gateway-overview.md#connectivity-to-azure) is temporarily lost, the flow of telemetry to Azure is interrupted and the data is lost for the duration of the outage.
+Consider [setting up local monitoring](how-to-configure-local-metrics-logs.md) to ensure the ability to observe API traffic and prevent telemetry loss during Azure connectivity outages.
+
+## High availability
+The self-hosted gateway is a crucial component in the infrastructure and has to be highly available. However, failures can and will happen.
+
+Consider protecting the self-hosted gateway against [disruption](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/).
+
+> [!TIP]
+> When installing with Helm, easily enable highly available scheduling by enabling the `highAvailability.enabled` configuration option.
+>
+> Learn more on how to [install an API Management self-hosted gateway on Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
+
+### Protecting against node failure
+To avoid being affected by data center or node failures, consider using a Kubernetes cluster that uses availability zones to achieve high availability at the node level.
+
+Availability zones allow you to schedule the self-hosted gateway's pods on nodes spread across the zones by using:
+- [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) (Recommended - Kubernetes v1.19+)
+- [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/)
+
+> [!Note]
+> If you are using Azure Kubernetes Service, learn how to use availability zones in [this article](./../aks/availability-zones.md).
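As a sketch of the first option, a topology spread constraint in the gateway Deployment's pod template could look like the following (labels and names are illustrative):

```yaml
# Sketch: spread gateway pods evenly across availability zones (Kubernetes v1.19+).
# The app label is illustrative and must match the pod labels of your deployment.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app: contoso-gateway
```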
+
+### Protecting against pod disruption
+
+Pods can experience disruption for [various reasons](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions), such as manual pod deletion and node maintenance.
+
+Consider using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) to ensure that a minimum number of pods is available at any given time.
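For example, a Pod Disruption Budget for the gateway pods might look like this (a sketch; the names, label, and threshold are illustrative, and `policy/v1` assumes Kubernetes v1.21+):

```yaml
# Sketch: keep at least one gateway pod available during voluntary disruptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: contoso-gateway-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: contoso-gateway
```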
+
+## Next steps
+
+* To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/self-hosted-gateway-overview.md
When connectivity is restored, each self-hosted gateway affected by the outage w
## Next steps
- [Read a whitepaper for additional background on this topic](https://aka.ms/hybrid-and-multi-cloud-api-management)
-- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
-- [Deploy self-hosted gateway to Kubernetes with YAML](how-to-deploy-self-hosted-gateway-kubernetes.md)
-- [Deploy self-hosted gateway to Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
-- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- Review the guidance on [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
+- Learn [how to deploy API Management self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
+- Learn [how to deploy API Management self-hosted gateway to Kubernetes with YAML](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- Learn [how to deploy API Management self-hosted gateway to Kubernetes with Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md)
+- Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md)
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/visual-studio-code-tutorial.md
Title: Tutorial - Import and manage APIs - Azure API Management and Visual Studio Code | Microsoft Docs
-description: In this tutorial, learn how to use the Azure API Management Extension for Visual Studio Code to import, test, and manage APIs.
+description: Learn how to use the Azure API Management Extension for Visual Studio Code to import, test, and manage APIs.
Previously updated : 12/10/2020 Last updated : 12/27/2021+ # Tutorial: Use the API Management Extension for Visual Studio Code to import and manage APIs
You learn how to:
> * Apply API Management policies
> * Test the API
-
:::image type="content" source="media/visual-studio-code-tutorial/tutorial-api-result.png" alt-text="API in API Management Extension":::
-For an introduction to additional API Management features, see the API Management tutorials using the [Azure portal](import-and-publish.md).
+For an introduction to more API Management features, see the API Management tutorials using the [Azure portal](import-and-publish.md).
## Prerequisites
-- Understand [Azure API Management terminology](api-management-terminology.md)
-- Ensure you have installed [Visual Studio Code](https://code.visualstudio.com/) and the latest [Azure API Management Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview)
-- [Create an API Management instance](vscode-create-service-instance.md)
+
+* Understand [Azure API Management terminology](api-management-terminology.md).
+* Ensure you've installed [Visual Studio Code](https://code.visualstudio.com/) and the latest [Azure API Management Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview).
+* [Create an API Management instance](vscode-create-service-instance.md).
## Import an API
The following example imports an OpenAPI Specification in JSON format into API M
1. In Visual Studio Code, select the Azure icon from the Activity Bar. 1. In the Explorer pane, expand the API Management instance you created.
-1. Right-click **APIs**, and select **Import from OpenAPI Link**.
+1. Right-click **APIs**, and select **Import from OpenAPI Link**.
1. When prompted, enter the following values:
- 1. An **OpenAPI link** for content in JSON format. For this example: *https://conferenceapi.azurewebsites.net?format=json*.
+ 1. An **OpenAPI link** for content in JSON format. For this example: *<https://conferenceapi.azurewebsites.net?format=json>*.
   This URL is the service that implements the example API. API Management forwards requests to this address.
   1. An **API name**, such as *demo-conference-api*, that is unique in the API Management instance. This name can contain only letters, numbers, and hyphens. The first and last characters must be alphanumeric. This name is used in the path to call the API.
After the API is imported successfully, it appears in the Explorer pane, and ava
## Edit the API
-You can edit the API in Visual Studio Code. For example, edit the Resource Manager JSON description of the API in the editor window to remove the **http** protocol used to access the API. Then select **File** > **Save**.
+You can edit the API in Visual Studio Code. For example, edit the Resource Manager JSON description of the API in the editor window to remove the **http** protocol used to access the API.
:::image type="content" source="media/visual-studio-code-tutorial/import-demo-api.png" alt-text="Edit JSON description":::

To edit the OpenAPI format, right-click the API name in the Explorer pane and select **Edit OpenAPI**. Make your changes, and then select **File** > **Save**.
-## Apply policies to the API
+## Apply policies to the API
-API Management provides [policies](api-management-policies.md) you can configure for your APIs. Policies are a collection of statements that are executed sequentially on the request or response of an API. Policies can be global, which apply to all APIs in your API Management instance, or they can be scoped to a specific API or API operation.
+API Management provides [policies](api-management-policies.md) that you can configure for your APIs. Policies are a collection of statements. These statements are run sequentially on the request or response of an API. Policies can be global, which apply to all APIs in your API Management instance, or specific to a product, an API, or an API operation.
-This section shows how to apply some common outbound policies to your API that transform the API response. The policies in this example change response headers and hide original backend URLs that appear in the response body.
+This section shows how to apply common outbound policies to your API that transform the API response. The policies in this example change response headers and hide original backend URLs that appear in the response body.
-1. In the Explorer pane, select **Policy** under the *demo-conference-api* that you imported. The policy file opens in the editor window. This file configures policies for all operations in the API.
+1. In the Explorer pane, select **Policy** under the *demo-conference-api* that you imported. The policy file opens in the editor window. This file configures policies for all operations in the API.
1. Update the file with the following content in the `<outbound>` element: ```html
This section shows how to apply some common outbound policies to your API that t
* The first `set-header` policy adds a custom response header for demonstration purposes. * The second `set-header` policy deletes the **X-Powered-By** header, if it exists. This header can reveal the application framework used in the API backend, and publishers often remove it. * The `redirect-content-urls` policy rewrites (masks) links in the response body so that they point to the equivalent links via the API Management gateway.
-
-1. Save the file. If you are prompted, select **Upload** to upload the file to the cloud.
+
+1. Save the file. If you're prompted, select **Upload** to upload the file to the cloud.
## Test the API ### Get the subscription key
-To test the imported API you imported and the policies that are applied, you need a subscription key for your API Management instance.
+You need a subscription key for your API Management instance to test the imported API and the policies that are applied.
1. In the Explorer pane, right-click the name of your API Management instance. 1. Select **Copy Subscription Key**.
- :::image type="content" source="media/visual-studio-code-tutorial/copy-subscription-key.png" alt-text="Copy subscription key":::
+ :::image type="content" source="media/visual-studio-code-tutorial/copy-subscription-key-1.png" alt-text="Copy subscription key":::
### Test an API operation 1. In the Explorer pane, expand the **Operations** node under the *demo-conference-api* that you imported. 1. Select an operation such as *GetSpeakers*, and then right-click the operation and select **Test Operation**. 1. In the editor window, next to **Ocp-Apim-Subscription-Key**, replace `{{SubscriptionKey}}` with the subscription key that you copied.
-1. Select **Send request**.
+1. Select **Send request**.
:::image type="content" source="media/visual-studio-code-tutorial/test-api.png" alt-text="Send API request from Visual Studio Code":::
When the request succeeds, the backend responds with **200 OK** and some data.
:::image type="content" source="media/visual-studio-code-tutorial/test-api-policies.png" alt-text="API test operation":::

Notice the following details in the response:
+
* The **Custom** header is added to the response.
* The **X-Powered-By** header doesn't appear in the response.
* URLs to the API backend are redirected to the API Management gateway, in this case `https://apim-hello-world.azure-api.net/demo-conference-api`.

### Trace the API operation
-For detailed tracing information to help you debug the API operation, select the link that appears next to **Ocp-APIM-Trace-Location**.
+For detailed tracing information to help you debug the API operation, select the link that appears next to **Ocp-APIM-Trace-Location**.
-The JSON file at that location contains Inbound, Backend, and Outbound trace information so you can determine where any problems occur after the request is made.
+The JSON file at that location contains Inbound, Backend, and Outbound trace information. The trace information helps you determine where problems occur after the request is made.
> [!TIP] > When you test API operations, the API Management Extension allows optional [policy debugging](api-management-debug-policies.md) (available in the Developer service tier).
When no longer needed, remove the API Management instance by right-clicking and
Alternatively, you can select **Delete API Management** to delete only the API Management instance (this operation doesn't delete its resource group).

## Next steps
-This tutorial introduced several features of the API Management Extension for Visual Studio Code that you can use to import and manage APIs. You learned how to:
+This tutorial introduced several features of the API Management Extension for Visual Studio Code. You can use these features to import and manage APIs. You learned how to:
> [!div class="checklist"] > * Import an API into API Management
This tutorial introduced several features of the API Management Extension for Vi
> * Apply API Management policies > * Test the API
-The API Management Extension provides additional features to work with your APIs. For example, [debug polices](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
+The API Management Extension provides more features to work with your APIs. For example, [debug policies](api-management-debug-policies.md) (available in the Developer service tier), or create and manage [named values](api-management-howto-properties.md).
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
If a version is not specified in the reference, then the app will use the latest
Key Vault references can be used as values for [Application Settings](configure-common.md#configure-app-settings), allowing you to keep secrets in Key Vault instead of the site config. Application Settings are securely encrypted at rest, but if you need secret management capabilities, they should go into Key Vault.
-To use a Key Vault reference for an [application setting](configure-common.md#add-or-edit), set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
+To use a Key Vault reference for an [app setting](configure-common.md#configure-app-settings), set the reference as the value of the setting. Your app can reference the secret through its key as normal. No code changes are required.
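A Key Vault reference is a specially formatted string stored as the setting value. As a sketch of how that string is composed (vault and secret names are hypothetical):

```shell
# Build the Key Vault reference string used as an app setting value.
# Vault and secret names are hypothetical.
vault="contoso-vault"
secret="MySecret"
ref="@Microsoft.KeyVault(SecretUri=https://${vault}.vault.azure.net/secrets/${secret}/)"
echo "$ref"
# @Microsoft.KeyVault(SecretUri=https://contoso-vault.vault.azure.net/secrets/MySecret/)
```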
> [!TIP] > Most application settings using Key Vault references should be marked as slot settings, as you should have separate vaults for each environment.
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-common.md
Title: Configure apps in the portal
-description: Learn to configure common settings for an App Service app in the Azure portal. App settings, app config, connection strings, platform, language stack, container, etc.
+ Title: Configure apps
+description: Learn to configure common settings for an App Service app. App settings, app config, connection strings, platform, language stack, container, etc.
keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0
Last updated 12/07/2020
-# Configure an App Service app in the Azure portal
+# Configure an App Service app
-This article explains how to configure common settings for web apps, mobile back end, or API app using the [Azure portal].
+This article explains how to configure common settings for web apps, mobile back ends, or API apps.
## Configure app settings
-In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart. App setting names can't contain periods (`.`). If an app setting contains a period, the period is replaced with an underscore in the container.
-
-In the [Azure portal], search for and select **App Services**, and then select your app.
-
-![Search for App Services](./media/configure-common/search-for-app-services.png)
-
-In the app's left menu, select **Configuration** > **Application settings**.
-
-![Application Settings](./media/configure-common/open-ui.png)
+In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart.
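To illustrate the environment-variable behavior locally, a setting's key/value pair surfaces in the application process like this (the setting name is hypothetical):

```shell
# Simulate how an app setting reaches application code as an environment variable.
# WEBSITE_GREETING is a hypothetical setting name.
env WEBSITE_GREETING="hello" sh -c 'echo "$WEBSITE_GREETING"'
# hello
```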
For ASP.NET and ASP.NET Core developers, setting app settings in App Service is like setting them in `<appSettings>` in *Web.config* or *appsettings.json*, but the values in App Service override the ones in *Web.config* or *appsettings.json*. You can keep development settings (for example, local MySQL password) in *Web.config* or *appsettings.json* and production secrets (for example, Azure MySQL database password) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure.
App settings are always encrypted when stored (encrypted-at-rest).
> [!NOTE] > App settings can also be resolved from [Key Vault](../key-vault/index.yml) using [Key Vault references](app-service-key-vault-references.md).
-### Show hidden values
+# [Azure portal](#tab/portal)
-By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click the **Value** field of that setting. To see the values of all app settings, click the **Show value** button.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
-### Add or edit
+ ![Search for App Services](./media/configure-common/search-for-app-services.png)
-To add a new app setting, click **New application setting**. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+1. In the app's left menu, select **Configuration** > **Application settings**.
-To edit a setting, click the **Edit** button on the right side.
+ ![Application Settings](./media/configure-common/open-ui.png)
-When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+ By default, values for app settings are hidden in the portal for security. To see a hidden value of an app setting, click its **Value** field. To see the hidden values of all app settings, click the **Show value** button.
-> [!NOTE]
-> In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore).
->
+1. To add a new app setting, click **New application setting**. To edit a setting, click the **Edit** button on the right side.
+
+1. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+
+ App setting names can't contain periods (`.`). If an app setting contains a period, the period is replaced with an underscore in the container.
+
+ > [!NOTE]
+ > In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore).
+ >
+
+1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+
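The name mappings described above (periods become underscores in the container; nested JSON keys use a double underscore on Linux) amount to simple string substitution, sketched here with illustrative setting names:

```shell
# Illustrative setting names showing the container key-name mappings.
dotted="My.Setting"
nested="ApplicationInsights:InstrumentationKey"
echo "${dotted//./_}"     # My_Setting
echo "${nested//:/__}"    # ApplicationInsights__InstrumentationKey
```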
+# [Azure CLI](#tab/cli)
+
+Add or edit an app setting with [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set):
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group <group-name> --settings <setting-name>="<value>"
+```
+
+Replace `<setting-name>` with the name of the setting, and `<value>` with the value to assign to it. This command creates the setting if it doesn't already exist.
+
+Show all settings and their values with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list):
+
+```azurecli-interactive
+az webapp config appsettings list --name <app-name> --resource-group <group-name>
+```
+
+Remove one or more settings with [az webapp config appsettings delete](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_delete):
+
+```azurecli-interactive
+az webapp config appsettings delete --name <app-name> --resource-group <group-name> --setting-names <setting-name1> <setting-name2> ...
+```
-### Edit in bulk
+# [Azure PowerShell](#tab/ps)
-To add or edit app settings in bulk, click the **Advanced edit** button. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+Set one or more app settings with [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp):
+
+```azurepowershell-interactive
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"<setting-name1>"="<value1>"; "<setting-name2>"="<value2>";...}
+```
+
+This cmdlet replaces the entire set of app settings with the ones you specify. To add or edit an app setting within an existing set, include the existing app settings in your input Hashtable by using the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) cmdlet. For example:
+
+```azurepowershell-interactive
+# Get app configuration
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Copy app settings to a new Hashtable
+$appSettings = @{}
+ForEach ($item in $webapp.SiteConfig.AppSettings) {
+    $appSettings[$item.Name] = $item.Value
+}
+
+# Add or edit one or more app settings
+$appSettings['<setting-name1>'] = '<value1>'
+$appSettings['<setting-name2>'] = '<value2>'
+
+# Save changes
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings $appSettings
+```
+
+To check if an app setting is slot-specific, use [Get-AzWebAppSlotConfigName](/powershell/module/az.websites/get-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Get-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> | select AppSettingNames
+```
+
+To make one or more app settings slot-specific, use [Set-AzWebAppSlotConfigName](/powershell/module/az.websites/set-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -AppSettingNames <setting-name1>,<setting-name2>,...
+```
+
+--
+
+### Edit app settings in bulk
+
+# [Azure portal](#tab/portal)
+
+Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
App settings have the following JSON formatting:
App settings have the following JSON formatting:
] ```
-### Automate app settings with the Azure CLI
+# [Azure CLI](#tab/cli)
-You can use the Azure CLI to create and manage settings from the command line.
+Run [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) with the name of the JSON file.
-- Assign a value to a setting with [az webapp config app settings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set):
+```azurecli-interactive
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
+```
- ```azurecli-interactive
- az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings <setting-name>="<value>"
- ```
-
- Replace `<setting-name>` with the name of the setting, and `<value>` with the value to assign to it. This command creates the setting if it doesn't already exist.
-
-- Show all settings and their values with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list):
-
- ```azurecli-interactive
- az webapp config appsettings list --name <app-name> --resource-group <resource-group-name>
- ```
-
-- Remove one or more settings with [az webapp config app settings delete](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_delete):
+> [!TIP]
+> Wrapping the file name with quotes is only required in PowerShell.
+
+ The file format needed is a JSON array of settings where the slot setting field is optional. For example:
- ```azurecli-interactive
- az webapp config appsettings delete --name <app-name> --resource-group <resource-group-name> --setting-names {<names>}
- ```
+```json
+[
+ {
+ "name": "key1",
+ "slotSetting": false,
+ "value": "value1"
+ },
+ {
+ "name": "key2",
+ "value": "value2"
+ }
+]
+```
+
+For convenience, you can save existing settings into a JSON file with [az webapp config appsettings list](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_list). The following example can be run in Bash.
- Replace `<names>` with a space-separated list of setting names.
+```azurecli-interactive
+# Save the settings
+az webapp config appsettings list --name <app-name> --resource-group <group-name> > settings.json
+
+# Edit the JSON file
+...
+
+# Update the app with the JSON file
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings @settings.json
+```
+
+# [Azure PowerShell](#tab/ps)
+
+It's not possible to edit app settings in bulk by using a JSON file with Azure PowerShell.
+
+--
## Configure connection strings
Connection strings are always encrypted when stored (encrypted-at-rest).
> [!NOTE] > Connection strings can also be resolved from [Key Vault](../key-vault/index.yml) using [Key Vault references](app-service-key-vault-references.md).
-### Show hidden values
+# [Azure portal](#tab/portal)
+
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
-By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, just click the **Value** field of that string. To see the values of all connection strings, click the **Show value** button.
+ ![Search for App Services](./media/configure-common/search-for-app-services.png)
-### Add or edit
+1. In the app's left menu, select **Configuration** > **Application settings**.
-To add a new connection string, click **New connection string**. In the dialog, you can [stick the connection string to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
+ ![Application Settings](./media/configure-common/open-ui.png)
-To edit a setting, click the **Edit** button on the right side.
+ By default, values for connection strings are hidden in the portal for security. To see a hidden value of a connection string, click its **Value** field. To see the hidden values of all connection strings, click the **Show value** button.
-When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. To add a new connection string, click **New connection string**. To edit a connection string, click the **Edit** button on the right side.
-### Edit in bulk
+1. In the dialog, you can [stick the connection string to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
-To add or edit connection strings in bulk, click the **Advanced edit** button. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+1. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+
+# [Azure CLI](#tab/cli)
+
+Add or edit a connection string with [az webapp config connection-string set](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set):
+
+```azurecli-interactive
+az webapp config connection-string set --name <app-name> --resource-group <group-name> --connection-string-type <type> --settings <string-name>='<value>'
+```
+
+Replace `<string-name>` with the name of the connection string, and `<value>` with the value to assign to it. For possible values of `<type>` (for example, `SQLAzure`), see the [CLI command documentation](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set).
+
+Show all connection strings and their values with [az webapp config connection-string list](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_list):
+
+```azurecli-interactive
+az webapp config connection-string list --name <app-name> --resource-group <group-name>
+```
+
+Remove one or more connection strings with [az webapp config connection-string delete](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_delete):
+
+```azurecli-interactive
+az webapp config connection-string delete --name <app-name> --resource-group <group-name> --setting-names <string-name1> <string-name2> ...
+```
+
+# [Azure PowerShell](#tab/ps)
+
+Set one or more connection strings with [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp):
+
+```azurepowershell-interactive
+$PropertiesObject = @{
+ "<string-name1>" = @{
+ value="<connection-string1>";
+ type="<type>"};
+ "<string-name2>" = @{
+ value="<connection-string2>";
+ type="<type>"}
+}
+
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -ConnectionStrings $PropertiesObject
+```
+
+Each connection string contains a name (`<string-name1>`), a value (`<connection-string1>`), and the type, which is a numerical value that corresponds to one of the [ConnectionStringType](/dotnet/api/microsoft.azure.management.websites.models.connectionstringtype) enum fields. For example, for SQL Azure, specify `type="2"`.
+
+This cmdlet replaces the entire set of connection strings with the ones you specify. To add or edit a connection string within an existing set, include the existing connection strings in your input hashtable by using the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) cmdlet. For example:
+
+```azurepowershell-interactive
+# Get app configuration
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Copy connection strings to a new hashtable
+$connStrings = @{}
+ForEach ($item in $webapp.SiteConfig.ConnectionStrings) {
+    $connStrings[$item.Name] = @{value=$item.Value; type=$item.Type}
+}
+
+# Add or edit one or more connection strings
+$connStrings['<string-name1>'] = @{value='<connection-string1>'; type='<type>'}
+$connStrings['<string-name2>'] = @{value='<connection-string2>'; type='<type>'}
+
+# Save changes
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -ConnectionStrings $connStrings
+```
+
+To check if a connection string is slot-specific, use [Get-AzWebAppSlotConfigName](/powershell/module/az.websites/get-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Get-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> | select ConnectionStringNames
+```
+
+To make one or more connection strings slot-specific, use [Set-AzWebAppSlotConfigName](/powershell/module/az.websites/set-azwebappslotconfigname):
+
+```azurepowershell-interactive
+Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -ConnectionStringNames <string-name1>,<string-name2>,...
+```
+
+--
+
+### Edit connection strings in bulk
+
+# [Azure portal](#tab/portal)
+
+Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
Connection strings have the following JSON formatting:
Connection strings have the following JSON formatting:
] ```
+# [Azure CLI](#tab/cli)
+
+Run [az webapp config connection-string set](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set) with the name of the JSON file.
+
+```azurecli-interactive
+az webapp config connection-string set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
+```
+
+> [!TIP]
+> Wrapping the file name with quotes is only required in PowerShell.
+
+The file format needed is a JSON array of settings where the slot setting field is optional. For example:
+
+```json
+[
+ {
+ "name": "name-1",
+ "value": "conn-string-1",
+ "type": "SQLServer",
+ "slotSetting": false
+ },
+ {
+ "name": "name-2",
+ "value": "conn-string-2",
+    "type": "PostgreSQL"
+ },
+ ...
+]
+```
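If you generate the settings file programmatically, a quick local check that the array parses as valid JSON catches mistakes like trailing commas before the CLI rejects the file. A minimal sketch in Python (the field names follow the example above; `slotSetting` is optional, so it's omitted when not given):

```python
import json

def make_connection_strings(entries):
    """Build the JSON array accepted by the --settings @file option.

    entries: list of (name, value, type, slot_setting) tuples;
    pass slot_setting=None to omit the optional field.
    """
    settings = []
    for name, value, conn_type, slot_setting in entries:
        item = {"name": name, "value": value, "type": conn_type}
        if slot_setting is not None:
            item["slotSetting"] = slot_setting
        settings.append(item)
    return json.dumps(settings, indent=2)

doc = make_connection_strings([
    ("name-1", "conn-string-1", "SQLServer", False),
    ("name-2", "conn-string-2", "PostgreSQL", None),
])
print(doc)
```

Because `json.dumps` can never emit a trailing comma, writing the file this way sidesteps the hand-editing error shown above.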
+
+For convenience, you can save existing connection strings into a JSON file with [az webapp config connection-string list](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_list). The following example can be run in Bash.
+
+```azurecli-interactive
+# Save the connection strings
+az webapp config connection-string list --resource-group <group-name> --name <app-name> > settings.json
+
+# Edit the JSON file
+...
+
+# Update the app with the JSON file
+az webapp config connection-string set --resource-group <group-name> --name <app-name> --settings @settings.json
+```
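The "edit the JSON file" step can also be scripted. The following Python sketch assumes the saved file is a flat JSON array of objects with `name` and `value` fields, matching the earlier example; the real `list` output may nest values differently, so adapt as needed:

```python
import json

# Hypothetical saved output: a JSON array of connection-string objects.
saved = '[{"name": "MyDb", "value": "old-conn", "type": "SQLAzure", "slotSetting": false}]'

settings = json.loads(saved)
for item in settings:
    if item["name"] == "MyDb":       # pick the string to change
        item["value"] = "new-conn"   # edit its value in place

edited = json.dumps(settings, indent=2)
print(edited)
```

The edited text can then be written back to `settings.json` before running the update command.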
+
+# [Azure PowerShell](#tab/ps)
+
+It's not possible to edit connection strings in bulk by using a JSON file with Azure PowerShell.
+
+--
+ <a name="platform"></a>
+## Configure language stack settings
+
+- [ASP.NET Core](configure-language-dotnetcore.md)
+- [Node.js](configure-language-nodejs.md)
+- [PHP](configure-language-php.md)
+- [Python](configure-language-python.md)
+- [Java](configure-language-java.md)
+- [Ruby](configure-language-ruby.md)
+ <a name="alwayson"></a>
+
+## Configure general settings
+# [Azure portal](#tab/portal)
+In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **General settings**.
+
+![General settings](./media/configure-common/open-general.png)
Here, you can configure some common settings for the app. Some settings require
- **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** is not turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded. Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- - **Managed pipeline version**: The IIS [pipeline mode]. Set it to **Classic** if you have a legacy app that requires an older version of IIS.
- **HTTP version**: Set to **2.0** to enable support for the [HTTP/2](https://wikipedia.org/wiki/HTTP/2) protocol.

  > [!NOTE]
  > Most modern browsers support HTTP/2 protocol over TLS only, while non-encrypted traffic continues to use HTTP/1.1. To ensure that client browsers connect to your app with HTTP/2, secure your custom DNS name. For more information, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
Here, you can configure some common settings for the app. Some settings require
- **Debugging**: Enable remote debugging for [ASP.NET](troubleshoot-dotnet-visual-studio.md#remotedebug), [ASP.NET Core](/visualstudio/debugger/remote-debugging-azure), or [Node.js](configure-language-nodejs.md#debug-remotely) apps. This option turns off automatically after 48 hours.
- **Incoming client certificates**: Require client certificates in [mutual authentication](app-service-web-configure-tls-mutual-auth.md).
+# [Azure CLI](#tab/cli)
+
+You can set many of the common configurable options using [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set). The following example shows a subset of the configurable options.
+
+```azurecli-interactive
+az webapp config set --resource-group <group-name> --name <app-name> --use-32bit-worker-process [true|false] --web-sockets-enabled [true|false] --always-on [true|false] --http20-enabled [true|false] --auto-heal-enabled [true|false] --remote-debugging-enabled [true|false] --number-of-workers <number-of-workers>
+```
+
+To show the existing settings, use the [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show) command.
+
+# [Azure PowerShell](#tab/ps)
+
+You can set many of the common configurable options using [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp). The following example shows a subset of the configurable options.
+
+```azurepowershell-interactive
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -Use32BitWorkerProcess [True|False] -WebSocketsEnabled [True|False] -AlwaysOn [True|False] -NumberOfWorkers <number-of-workers>
+```
+
+To show the existing settings, use the [Get-AzWebApp](/powershell/module/az.websites/get-azwebapp) command.
+
+--
## Configure default documents

This setting is only for Windows apps.
-In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **Default documents**.
+The default document is the web page that's displayed at the root URL of an App Service app. The first matching file in the list is used. If the app uses modules that route based on URL instead of serving static content, there is no need for default documents.
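The first-match rule described above can be sketched in a few lines of Python (a conceptual model only, not how App Service is implemented):

```python
def resolve_default_document(root_files, default_documents):
    """Return the first configured default document present in the site
    root, mirroring 'the first matching file in the list is used'."""
    present = set(root_files)
    for doc in default_documents:
        if doc in present:
            return doc
    return None  # no match: the root URL serves no default page

# index.html wins over hostingstart.html because it appears
# earlier in the configured list, not because of file order on disk.
print(resolve_default_document(
    ["hostingstart.html", "index.html"],
    ["Default.htm", "index.html", "hostingstart.html"],
))
```

This is why the order of the list in the portal matters: moving a file name higher in the list changes which page is served at the root URL.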
-![Default documents](./media/configure-common/open-documents.png)
+# [Azure portal](#tab/portal)
-The default document is the web page that's displayed at the root URL for a website. The first matching file in the list is used. To add a new default document, click **New document**. Don't forget to click **Save**.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Default documents**.
-If the app uses modules that route based on URL instead of serving static content, there is no need for default documents.
+ ![Default documents](./media/configure-common/open-documents.png)
-## Configure path mappings
+1. To add a default document, click **New document**. To remove a default document, click **Delete** to its right.
-In the [Azure portal], search for and select **App Services**, and then select your app. In the app's left menu, select **Configuration** > **Path mappings**.
+# [Azure CLI](#tab/cli)
-![Path mappings](./media/configure-common/open-path.png)
+Add a default document by using [az resource update](/cli/azure/resource#az_resource_update):
-> [!NOTE]
-> The **Path mappings** tab may display OS-specific settings that differ from the example shown here.
+```azurecli-interactive
+az resource update --resource-group <group-name> --resource-type "Microsoft.Web/sites/config" --name <app-name>/config/web --add properties.defaultDocuments <filename>
+```
-### Windows apps (uncontainerized)
+# [Azure PowerShell](#tab/ps)
-For Windows apps, you can customize the IIS handler mappings and virtual applications and directories.
+Add a default document by modifying the app's PowerShell object, then updating the app:
-Handler mappings let you add custom script processors to handle requests for specific file extensions. To add a custom handler, click **New handler mapping**. Configure the handler as follows:
+```azurepowershell-interactive
+$webapp = Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+$webapp.SiteConfig.DefaultDocuments.Add("<filename>")
+Set-AzWebApp $webapp
+```
-- **Extension**. The file extension you want to handle, such as *\*.php* or *handler.fcgi*.-- **Script processor**. The absolute path of the script processor to you. Requests to files that match the file extension are processed by the script processor. Use the path `D:\home\site\wwwroot` to refer to your app's root directory.-- **Arguments**. Optional command-line arguments for the script processor.
+--
-<a name="redirect-to-custom-directory" aria-hidden="true"></a>
+<a name="redirect-to-a-custom-directory" aria-hidden="true"></a>
-### Redirect to a custom directory
+## Map a URL path to a directory
-> [!NOTE]
-> By default, App Service directs web requests to the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. To continue the `contoso.com` DNS example, such an app is accessible at `http://contoso.com/public`, but you typically want to direct `http://contoso.com` to the `public` directory instead.
+By default, App Service starts your app from the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. Such an app would be accessible at `http://contoso.com/public`, for example, but you typically want to direct `http://contoso.com` to the `public` directory instead. If your app's startup file is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories.
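Conceptually, each request path is matched against the configured virtual paths, the longest matching prefix wins, and the file is served from that application's physical path. The following Python sketch models that lookup under simplified assumptions (it is not IIS's exact matching algorithm, and it assumes a root `/` application always exists):

```python
def map_request(path, virtual_apps):
    """Pick the virtual application with the longest virtual-path prefix.

    virtual_apps: dict mapping a virtual path ('/', '/app2', ...) to the
    physical path it serves from. Returns (physical_root, remainder).
    """
    matches = [v for v in virtual_apps
               if v == "/" or path == v or path.startswith(v + "/")]
    best = max(matches, key=len)  # longest prefix wins
    remainder = path.lstrip("/") if best == "/" else path[len(best):].lstrip("/")
    return virtual_apps[best], remainder

apps = {"/": r"site\wwwroot\public", "/app2": r"site\wwwroot\app2"}
print(map_request("/index.php", apps))       # served from the public folder
print(map_request("/app2/index.php", apps))  # served from the app2 folder
```

This mirrors the Laravel scenario: remapping `/` to `site\wwwroot\public` makes the root URL serve files from the `public` subdirectory.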
-Each app has the default root path (`/`) mapped to `D:\home\site\wwwroot`, where your code is deployed by default. If your app root is in a different folder, or if your repository has more than one application, you can edit or add virtual applications and directories here.
+# [Azure portal](#tab/portal)
-From the **Path mappings** tab, click **New virtual application or directory**.
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Path mappings**.
+1. Click **New virtual application or directory**.
-- To map a virtual directory to a physical path, leave the **Directory** check box selected. Specify the virtual directory and the corresponding relative (physical) path to the website root (`D:\home`).-- To mark a virtual directory as a web application, clear the **Directory** check box.
-
- ![Directory check box](./media/configure-common/directory-check-box.png)
+ - To map a virtual directory to a physical path, leave the **Directory** check box selected. Specify the virtual directory and the corresponding relative (physical) path to the website root (`D:\home`).
+ - To mark a virtual directory as a web application, clear the **Directory** check box.
+
+ ![Directory check box](./media/configure-common/directory-check-box.png)
-### Containerized apps
+1. Click **OK**.
-You can [add custom storage for your containerized app](configure-connect-to-azure-storage.md). Containerized apps include all Linux apps and also the Windows and Linux custom containers running on App Service. Click **New Azure Storage Mount** and configure your custom storage as follows:
+# [Azure CLI](#tab/cli)
-- **Name**: The display name.-- **Configuration options**: **Basic** or **Advanced**.-- **Storage accounts**: The storage account with the container you want.-- **Storage type**: **Azure Blobs** or **Azure Files**.
- > [!NOTE]
- > Windows custom containers only support Azure Files.
-- **Storage container**: For basic configuration, the container you want.-- **Share name**: For advanced configuration, the file share name.-- **Access key**: For advanced configuration, the access key.-- **Mount path**: The absolute path in your container to mount the custom storage.
+The following example sets the root path `/` to the `public` subdirectory (which works for Laravel), and also adds a second virtual application at the `/app2` path. To run it, change `<group-name>` and `<app-name>`.
-For more information, see [Access Azure Storage as a network share from a container in App Service](configure-connect-to-azure-storage.md).
+```azurecli-interactive
+echo -n '[
+ {
+ "physicalPath"':' "site\\wwwroot\\public",
+ "preloadEnabled"':' false,
+ "virtualDirectories"':' null,
+ "virtualPath"':' "/"
+ },
+ {
+ "physicalPath"':' "site\\wwwroot\\app2",
+ "preloadEnabled"':' false,
+ "virtualDirectories"':' null,
+ "virtualPath"':' "/app2"
+ }
+ ]' > json.txt
-## Configure language stack settings
+json=$(cat json.txt)
-- [ASP.NET Core](configure-language-dotnetcore.md)-- [Node.js](configure-language-nodejs.md)-- [PHP](configure-language-php.md)-- [Python](configure-language-python.md)-- [Java](configure-language-java.md)-- [Ruby](configure-language-ruby.md)
+az resource update --resource-group <group-name> --resource-type Microsoft.Web/sites/config --name <app-name>/config/web --set properties.virtualApplications="$json"
+```
+
+# [Azure PowerShell](#tab/ps)
+
+The following example sets the root path `/` to the `public` subdirectory (which works for Laravel), and also adds a second virtual application at the `/app2` path. To run it, change `<group-name>` and `<app-name>`.
+
+```azurepowershell-interactive
+$webapp=Get-AzWebApp -ResourceGroupName <group-name> -Name <app-name>
+
+# Set default / path to public subdirectory
+$webapp.SiteConfig.VirtualApplications[0].PhysicalPath= "site\wwwroot\public"
+
+# Add a virtual application
+$virtualApp = New-Object Microsoft.Azure.Management.WebSites.Models.VirtualApplication
+$virtualApp.VirtualPath = "/app2"
+$virtualApp.PhysicalPath = "site\wwwroot\app2"
+$virtualApp.PreloadEnabled = $false
+$webapp.SiteConfig.VirtualApplications.Add($virtualApp)
+
+# Save settings
+Set-AzWebApp $webapp
+```
+
+--
+
+## Configure handler mappings
+
+For Windows apps, you can customize the IIS handler mappings and virtual applications and directories. Handler mappings let you add custom script processors to handle requests for specific file extensions.
+
+To add a custom handler:
+
+1. In the [Azure portal], search for and select **App Services**, and then select your app.
+1. In the app's left menu, select **Configuration** > **Path mappings**.
+
+ ![Path mappings](./media/configure-common/open-path.png)
+
+1. Click **New handler mapping**. Configure the handler as follows:
+
+ - **Extension**. The file extension you want to handle, such as *\*.php* or *handler.fcgi*.
+    - **Script processor**. The absolute path of the script processor. Requests to files that match the file extension are processed by the script processor. Use the path `D:\home\site\wwwroot` to refer to your app's root directory.
+ - **Arguments**. Optional command-line arguments for the script processor.
+
+1. Click **OK**.
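A handler mapping is effectively a lookup from a request's file name or extension to a script processor and its arguments. The following toy model illustrates that dispatch (the paths and handler names are illustrative assumptions, not an App Service API):

```python
import os

handlers = {
    # extension pattern or exact file name -> (script processor, arguments)
    "*.php": (r"D:\home\site\wwwroot\bin\php-cgi.exe", ""),
    "handler.fcgi": (r"D:\home\site\wwwroot\bin\fcgi.exe", "--verbose"),
}

def pick_handler(filename):
    """Return the (processor, args) entry for a request, or None."""
    base = os.path.basename(filename)
    if base in handlers:                   # exact file-name mapping first
        return handlers[base]
    ext = "*" + os.path.splitext(base)[1]  # e.g. '*.php'
    return handlers.get(ext)               # extension mapping, else None

print(pick_handler("index.php"))
print(pick_handler("static/logo.png"))
```

Requests that match no mapping (like the `.png` file here) fall through to the default static-file handling.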
## Configure custom containers
-See [Configure a custom Linux container for Azure App Service](configure-custom-container.md)
+- [Configure a custom container for Azure App Service](configure-custom-container.md)
+- [Add custom storage for your containerized app](configure-connect-to-azure-storage.md)
## Next steps
app-service Troubleshoot Intermittent Outbound Connection Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md
# Troubleshooting intermittent outbound connection errors in Azure App Service
-This article helps you troubleshoot intermittent connection errors and related performance issues in [Azure App Service](./overview.md). This topic will provide more information on, and troubleshooting methodologies for, exhaustion of source address network translation (SNAT) ports. If you require more help at any point in this article, contact the Azure experts at the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+This article helps you troubleshoot intermittent connection errors and related performance issues in [Azure App Service](./overview.md). This topic will provide more information on, and troubleshooting methodologies for, exhaustion of source network address translation (SNAT) ports. If you require more help at any point in this article, contact the Azure experts at the [MSDN Azure and the Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, file an Azure support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
## Symptoms
If SNAT ports are exhausted, where WebJobs are unable to connect to SQL Database
## Additional information * [SNAT with App Service](https://4lowtherabbit.github.io/blogs/2019/10/SNAT/)
-* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
+* [Troubleshoot slow app performance issues in Azure App Service](./troubleshoot-performance-degradation.md)
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-ilb-arm.md
Title: Use with Internal Load Balancer - Azure Application Gateway
-description: This page provides instructions to create, configure, start, and delete an Azure application gateway with internal load balancer (ILB) for Azure Resource Manager
+description: This article provides instructions to create, configure, start, and delete an Azure application gateway with internal load balancer (ILB)
Previously updated : 11/13/2019 Last updated : 01/11/2022 # Create an application gateway with an internal load balancer (ILB)
-Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that is not exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet but still require round-robin load distribution, session stickiness, or Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination.
+Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that is not exposed to the Internet, also known as an internal load balancer (ILB) endpoint. Configuring the gateway with an ILB is useful for internal line-of-business applications that are not exposed to the Internet. It's also useful for services and tiers within a multi-tier application that sit in a security boundary that is not exposed to the Internet but still require round-robin load distribution, session stickiness, or Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), termination.
-This article walks you through the steps to configure an application gateway with an ILB.
+This article walks you through the steps to configure a Standard v1 Application Gateway with an ILB.
## Before you begin
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Title: Configure an internal load balancer (ILB) endpoint
-description: This article provides information on how to configure Application Gateway with a private frontend IP address
+description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address
Previously updated : 02/23/2021 Last updated : 01/11/2022 # Configure an application gateway with an internal load balancer (ILB) endpoint
-Azure Application Gateway can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
+Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
Configuring the gateway using a frontend private IP address is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers within a multi-tier application that are in a security boundary that isn't exposed to the Internet but:
Configuring the gateway using a frontend private IP address is useful for intern
- session stickiness - or Transport Layer Security (TLS) termination (previously known as Secure Sockets Layer (SSL)).
-This article guides you through the steps to configure an application gateway with a frontend private IP address using the Azure portal.
+This article guides you through the steps to configure a Standard v1 Application Gateway with an ILB using the Azure portal.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Let's get started:
1. Select **Review + Create**.
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
+ :::image type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-two.png" alt-text="Still image showing the correct values for creating Form Recognizer resource.":::
1. Azure will run a quick validation check, after a few seconds you should see a green banner that says **Validation Passed**.
Let's get started:
1. After you select create, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says, **Your deployment is complete**.

## Get Endpoint URL and API keys

1. Once you receive the *deployment is complete* message, select the **Go to resource** button.
- :::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-three.gif" alt-text="Gif showing the validation process of creating Form Recognizer resource.":::
- 1. Copy the key and endpoint values from your Form Recognizer resource and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API. 1. If your overview page does not have the keys and endpoint visible, you can select the **Keys and Endpoint** button on the left navigation bar and retrieve them there.
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Previously updated : 01/06/2022 Last updated : 01/11/2022 recommendations: false #Customer intent: As a form-processing software developer, I want to learn how to use the Form Recognizer service with Logic Apps.
applied-ai-services Security How To Update Role Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/security-how-to-update-role-assignment.md
If you do NOT do this, nothing will break. The old role will continue to functio
Any new Immersive Reader resources you create with our script at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) will automatically use the new role.
-## Call to Action
+## Call to action
If you created and configured an Immersive Reader resource using the instructions at [How to: Create an Immersive Reader resource](./how-to-create-immersive-reader.md) prior to February 2022, it is advised that you perform the operation below to update the role assignment permissions on ALL of your Immersive Reader resources. The operation involves running a script to update the role assignment on a single resource. If you have multiple resources, run this script multiple times, once for each resource.
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
Steps to check your virtual network configuration:
### Private endpoint configuration
-Steps to check your private endpoint configurtation:
+Steps to check your private endpoint configuration:
1. `Public Network Access` flag is disabled by default on creating a private endpoint. Ensure that you have set the `Public Network Access` correctly. When you have your cache in Azure portal, look under **Private Endpoint** in the Resource menu on the left for this setting. 1. If you're trying to connect to your cache private endpoint from outside your virtual network of your cache, `Public Network Access` needs to be enabled.
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
It is strongly recommended to update to GA+ versions listed below instead of usi
| July 2021 | <ul><li>Support for direct proxies</li><li>Support for Log Analytics gateway</li></ul> [Learn more](https://azure.microsoft.com/updates/general-availability-azure-monitor-agent-and-data-collection-rules-now-support-direct-proxies-and-log-analytics-gateway/) | 1.1.1.0 | 1.10.5.0 | | August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>1</sup> | | September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Addressed regression introduced in 1.1.3.1<sup>2</sup> for Arc Windows servers</li></ul> | 1.1.3.2 | 1.12.2.0 <sup>2</sup> |
-| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0 |
+| December 2021 | Fixed issues impacting Linux Arc-enabled servers | N/A | 1.14.7.0<sup>3</sup> |
-<sup>1</sup> Do not use AMA Linux version 1.10.7.0
-<sup>2</sup> Known regression where it's not working on Arc-enabled servers
+<sup>1</sup> Do not use AMA Linux version 1.10.7.0
+<sup>2</sup> Known regression where it's not working on Arc-enabled servers
+<sup>3</sup> A bug was identified wherein Linux performance counters data stops flowing on restarting or rebooting the machine(s). A fix is underway and will be available in the next monthly version update.
## Install with Azure portal
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
By default, two data types - `Usage` and `AzureActivity` - are retained for a mi
Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
-The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types. Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
+> [!TIP]
+> The Log Analytics [purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing and is intended to be used for very limited cases. **To reduce your retention bill, the retention period must be reduced either for the workspace or for specific data types.** Learn more about managing [personal data stored in Log Analytics and Application Insights](./personal-data-mgmt.md).
### Retention by data type
azure-netapp-files Application Volume Group Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/application-volume-group-considerations.md
na Previously updated : 11/19/2021 Last updated : 01/11/2022 # Requirements and considerations for application volume group for SAP HANA
This article describes the requirements and considerations you need to be aware
To deploy SAP HANA volumes using the application volume group, you need to use your HANA database VMs as an anchor for a proximity placement group (PPG). ItΓÇÖs recommended that you create an availability set per database and use the **[SAP HANA VM pinning request form](https://aka.ms/HANAPINNING)** to pin the availability set to a dedicated compute cluster. After pinning, you need to add a PPG to the availability set and then deploy all hosts of an SAP HANA database using that availability set. Doing so ensures that all virtual machines are at the same location. If the virtual machines are started, the PPG has its anchor.
-> [!NOTE]
-> The PPG must be in the same resource group as the capacity pool you want to use for the SAP HANA volumes.
+> [!IMPORTANT]
+> If you have requested Azure NetApp Files SAP HANA volume pinning before the application volume group was available, you should remove the pinning for your subscription. Existing pinning for a subscription might impact the application volume group deployment and might result in a failure.
When using a PPG without a pinned availability set, a PPG would lose its anchor if all the virtual machines in that PPG are stopped. When the virtual machines are restarted, they might be started in a different location, which can result in a latency increase because the volumes created with the application volume group will not be moved.
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
You must have registered your subscription for using the NetApp Resource Provide
The NetApp account you created now appears in the Azure NetApp Files blade. > [!NOTE]
-> If you have not been granted access to the Azure NetApp Files service, you will receive the following error when you try to create the first NetApp account:
+> If you haven't registered your subscription for using the NetApp Resource Provider, you will receive the following error when you try to create the first NetApp account:
>
-> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '2017-08-15'.\"\r\n }\r\n}"}]}`
+> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '20xx-xx-xx'.\"\r\n }\r\n}"}]}`
## Next steps
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
Title: Use a custom markdown tile on Azure dashboards description: Learn how to add a markdown tile to an Azure dashboard to display static content Previously updated : 03/19/2021 Last updated : 01/11/2022
You can add a markdown tile to your Azure dashboards to display custom, static c
## Add a markdown tile to your dashboard
-1. Select **Dashboard** from the Azure portal sidebar.
+1. Select **Dashboard** from the Azure portal menu.
- ![Screenshot showing portal sidebar](./media/azure-portal-markdown-tile/azure-portal-nav.png)
1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.
You can add a markdown tile to your Azure dashboards to display custom, static c
![Screenshot showing entering URL](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-url.png) > [!NOTE]
- > For added security, you can create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md), then point to the file using the URL option. The markdown content is encrypted through the storage account's encryption options. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
+ > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](/azure/storage/common/customer-managed-keys-configure-key-vault?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
1. Select **Done** to dismiss the **Edit Markdown** pane. Your content appears on the Markdown tile, which you can resize by dragging the handle in the lower right-hand corner.
You can add a markdown tile to your Azure dashboards to display custom, static c
You can use any combination of plain text, Markdown syntax, and HTML content on the markdown tile. The Azure portal uses an open-source library called _marked_ to transform your content into HTML that is shown on the tile. The HTML produced by _marked_ is pre-processed by the portal before it's rendered. This step helps make sure that your customization won't affect the security or layout of the portal. During that pre-processing, any part of the HTML that poses a potential threat is removed. The following types of content aren't allowed by the portal:
-* JavaScript – `<script>` tags and inline JavaScript evaluations are removed.
-* iframes - `<iframe>` tags are removed.
-* Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
+- JavaScript – `<script>` tags and inline JavaScript evaluations are removed.
+- iframes - `<iframe>` tags are removed.
+- Style - `<style>` tags are removed. Inline style attributes on HTML elements aren't officially supported. You may find that some inline style elements work for you, but if they interfere with the layout of the portal, they could stop working at any time. The Markdown tile is intended for basic, static content that uses the default styles of the portal.
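The removal rules above can be illustrated with a toy sanitizer. This is only a sketch of the idea; the portal's real pre-processing (via _marked_ plus its own filtering) is more thorough, and this should not be treated as its implementation:

```python
import re

# Illustrative only: the portal's actual sanitizer does more than this.
BLOCKED = [
    r"<script\b[^>]*>.*?</script\s*>",  # JavaScript
    r"<iframe\b[^>]*>.*?</iframe\s*>",  # iframes
    r"<style\b[^>]*>.*?</style\s*>",    # style blocks
]

def sanitize(html: str) -> str:
    """Strip the tag families the portal rejects, keep everything else."""
    for pattern in BLOCKED:
        html = re.sub(pattern, "", html, flags=re.IGNORECASE | re.DOTALL)
    return html

print(sanitize("<h1>Hi</h1><script>alert(1)</script>"))  # → <h1>Hi</h1>
```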
## Next steps
-* To create a custom dashboard, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md)
+- Learn more about [creating dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+- Learn how to [share a dashboard by using Azure role-based access control](azure-portal-dashboard-share-access.md).
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
You can get to **Help + support** in the Azure portal. It's available from the A
To create a support request, you must be an [Owner](../../role-based-access-control/built-in-roles.md#owner) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or be assigned the [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role at the subscription level. To create a support request without a subscription, for example in an Azure Active Directory scenario, you must be an [Admin](../../active-directory/roles/permissions-reference.md).
+> [!IMPORTANT]
+> If a support request requires investigation into multiple subscriptions, you must have Owner, Contributor, or Support Request Contributor role for each subscription involved.
+ ### Go to Help + support from the global header To start a support request from anywhere in the Azure portal:
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-cli.md
Samples for the Azure CLI are written for the `bash` shell. To run this sample i
If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md). + ## Deployment scope You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-powershell.md
You need Azure PowerShell and to be connected to Azure:
If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md). + ## Deployment scope You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
The deployment commands changed in Azure CLI version 2.2.0. The examples in this
If you don't have Azure CLI installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md). + ## Deployment scope You can target your Azure deployment template to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cloud-shell.md
You can use [Azure Cloud Shell](../../cloud-shell/overview.md) to deploy an Azur
You can deploy to any scope. This article shows deploying to a resource group. + ## Deploy remote template To deploy an external template, provide the URI of the template exactly as you would for any external deployment. The external template could be in a GitHub repository or an external storage account.
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
You need to install Azure PowerShell and connect to Azure:
If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy ARM templates from Azure Cloud Shell](deploy-cloud-shell.md). + ## Deployment scope You can target your deployment to a resource group, subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-rest.md
This article explains how to use the Azure Resource Manager REST API with Azure
You can either include your template in the request body or link to a file. When using a file, it can be a local file or an external file that is available through a URI. When your template is in a storage account, you can restrict access to the template and provide a shared access signature (SAS) token during deployment. + ## Deployment scope You can target your deployment to a resource group, Azure subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands.
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
The deployment scope is determined by the template schema. For more information,
- [management groups](deploy-to-management-group.md) - [tenants](deploy-to-tenant.md) + ## Use common image To add the button to your web page or repository, use the following image:
azure-sql Business Continuity High Availability Disaster Recover Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview.md
SQL Database and SQL Managed Instance also provide several business continuity f
- [Temporal tables](../temporal-tables.md) enable you to restore row versions from any point in time. - [Built-in automated backups](automated-backups-overview.md) and [Point in Time Restore](recovery-using-backups.md#point-in-time-restore) enables you to restore complete database to some point in time within the configured retention period up to 35 days. - You can [restore a deleted database](recovery-using-backups.md#deleted-database-restore) to the point at which it was deleted if the **server has not been deleted**.-- [Long-term backup retention](long-term-retention-overview.md) enables you to keep the backups up to 10 years. This is in limited public preview for SQL Managed Instance.
+- [Long-term backup retention](long-term-retention-overview.md) enables you to keep backups for up to 10 years.
- [Active geo-replication](active-geo-replication-overview.md) enables you to create readable replicas and manually failover to any replica in case of a datacenter outage or application upgrade. - [Auto-failover group](auto-failover-group-overview.md#terminology-and-capabilities) allows the application to automatically recover in case of a datacenter outage.
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
Pooled databases generally support the same [business continuity features](busin
## Creating a new SQL Database elastic pool using the Azure portal
-There are two ways you can create an elastic pool in the Azure portal.
+You can create an elastic pool in the Azure portal in two ways:
+
+- Create an elastic pool and select an existing or new server.
+- Create an elastic pool from an existing server.
+
+To create an elastic pool and select an existing or new server:
1. Go to the [Azure portal](https://portal.azure.com) to create an elastic pool. Search for and select **Azure SQL**.
-2. Select **+Add** to open the **Select SQL deployment option** page. You can view additional information about elastic pools by selecting **Show details** on the **Databases** tile.
-3. On the **Databases** tile, select **Elastic pool** in the **Resource type** dropdown, then select **Create**:
+2. Select **Create** to open the **Select SQL deployment option** pane. To view more information about elastic pools, on the **Databases** tile, select **Show details**.
+3. On the **Databases** tile, in the **Resource type** dropdown, select **Elastic pool**, and then select **Create**.
![Create an elastic pool](./media/elastic-pool-overview/create-elastic-pool.png)
-4. Or you can create an elastic pool by navigating to an existing server and clicking **+ New pool** to create a pool directly into that server.
+To create an elastic pool from an existing server:
+
+- Go to an existing server and select **New pool** to create a pool directly in that server.
> [!NOTE] > You can create multiple pools on a server, but you can't add databases from different servers into the same pool.
azure-sql Geo Distributed Application Configure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/geo-distributed-application-configure-tutorial.md
Geo-replication settings can also be changed in the Azure portal, by selecting y
private static final String READ_WRITE_URL = String.format("jdbc:" + "sqlserver://%s.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
- "hostNameInCertificate=*.database.windows.net;loginTimeout=30;", +
+ "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD); private static final String READ_ONLY_URL = String.format("jdbc:" + "sqlserver://%s.secondary.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
- "hostNameInCertificate=*.database.windows.net;loginTimeout=30;", +
+ "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD); public static void main(String[] args) {
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Previously updated : 9/9/2021 Last updated : 1/10/2022 # Hyperscale service tier
Hyperscale service tier is only available in [vCore model](service-tiers-vcore.m
- **Compute**:
- The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to high-availabilty and named replicas automatically. We create a primary replica and one secondary [high-availability replica](service-tier-hyperscale-replicas.md) per Hyperscale database by default. Users may adjust the total number of high-availability replicas from 0-4, depending on the needed [SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/).
+ The Hyperscale compute unit price is per replica. The [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) price is applied to high-availability and named replicas automatically. Users may adjust the total number of high-availability secondary replicas from 0 to 4, depending on [SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) requirements.
- **Storage**:
- You don't need to specify the max data size when configuring a Hyperscale database. In the hyperscale tier, you're charged for storage for your database based on actual allocation. Storage is automatically allocated between 40 GB and 100 TB, in 10-GB increments. Multiple data files can grow at the same time if needed. A Hyperscale database is created with a starting size of 10 GB and it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB.
+ You don't need to specify the max data size when configuring a Hyperscale database. In the Hyperscale tier, you're charged for storage for your database based on actual allocation. Storage is automatically allocated between 40 GB and 100 TB, in 10-GB increments. Multiple data files can grow at the same time if needed. A Hyperscale database is created with a starting size of 10 GB and it starts growing by 10 GB every 10 minutes, until it reaches the size of 40 GB.
For more information about Hyperscale pricing, see [Azure SQL Database Pricing](https://azure.microsoft.com/pricing/details/sql-database/single/)
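The growth and billing rules above can be sketched numerically. The rounding behavior and the 1 TB == 1024 GB conversion here are illustrative assumptions, not a billing specification:

```python
import math

def initial_allocated_gb(minutes_since_create: int) -> int:
    # A new Hyperscale database starts at 10 GB and grows by 10 GB
    # every 10 minutes until it reaches 40 GB.
    return min(10 + 10 * (minutes_since_create // 10), 40)

def billed_allocation_gb(actual_gb: float) -> int:
    # Storage is allocated in 10-GB increments between 40 GB and 100 TB.
    # Rounding up and the GB-per-TB factor are illustrative assumptions.
    increments = math.ceil(actual_gb / 10) * 10
    return max(40, min(increments, 100 * 1024))

print(initial_allocated_gb(25))  # → 30
print(billed_allocation_gb(73))  # → 80
```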
azure-video-analyzer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/connect-classic-account-to-arm.md
In this article, we will go through options on connecting your **existing** Vide
* Unlimited paid Video Analyzer for Media account (classic account).
- * To perform the connect to ARM action, you should have owner's permissions on the Video Analyzer for Media account.
+ * To perform the connect-to-ARM (Azure Resource Manager) action, you must have owner permissions on the Video Analyzer for Media account.
* Azure Subscription. * User assigned managed identity (can be created along the flow).
-#### Transition state and recommended steps before connecting a classic account to be ARM-based
+## Transition state
-In the connect process the account management is connected to ARM, which will trigger 30 days of a transition state for the account. In that state, a connected account can be accessed through API by both access token [generated through API Management](https://aka.ms/avam-dev-portal)(classic way) or by an access token generated through ARM. The transition state, which moves all account management to be managed by ARM will disable the invite user feature from the Video Analyzer for Media portal, due to the fact account-management will be handled by [Azure RBAC][docs-rbac-overview]. This will result all invited users on this account losing their access to the Video Analyzer for Media account Media portal. Of course, this can easily be resolved by assigning the right role-assignment to all these users through Azure RBAC ([How to assign RBAC][docs-rbac-assignment]). Only the account owner, who performed the connect action, will be automatically assigned as an owner on the connected account. Additionally, when [Azure policies][docs-governance-policy] are enforced, this would override the settings on this account.
-If users won't be added through Azure RBAC to the account, after 30 days, they will lose access also through API, since after the transition state ends, no user can't generate a valid access token through APIM (classic way) anymore, but only through ARM. Making Azure RBAC the exclusive way to manage role based access control on the account.
+Connecting a classic account to be ARM-based triggers a 30-day transition state. During the transition state, a connected account can be accessed with an access token generated either way:
+
+* Access token [generated through API Management](https://aka.ms/avam-dev-portal) (classic way)
+* Access token [generated through ARM](/rest/api/videoindexer/generate/access-token)
+
+The transition state moves all account management to ARM, where it is handled by [Azure RBAC][docs-rbac-overview].
+
+The [invite users](invite-users.md) feature in the Video Analyzer for Media portal is disabled. Users who were invited to this account lose their access to it in the Video Analyzer for Media portal.
+However, this can be resolved by assigning the appropriate roles to these users through Azure RBAC. See [How to assign RBAC][docs-rbac-assignment].
+
+Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account.
+
+If users are not added to the account through Azure RBAC within 30 days, they lose access through both the API and the Video Analyzer for Media portal.
+After the transition state ends, a valid access token can only be generated through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
> [!NOTE]
-> If there are invited users you wish to remove their access before the end of the 30 days of transition state you should do so through the account settings on the Azure Video Analyzer for Media account settings **before** connecting the account to ARM
+> If there are invited users you wish to remove access from, do it before connecting the account to ARM.
+
+Before the 30-day transition state ends, you can remove users' access through the account settings page in the Azure Video Analyzer for Media portal.
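The token-validity rule during and after the transition can be summarized as a tiny policy function. This is a sketch of the described behavior, not an actual API:

```python
from datetime import date, timedelta

def valid_token_issuers(connect_date: date, today: date) -> set:
    """During the 30-day transition state both classic (API Management)
    and ARM access tokens work; after it ends, only ARM tokens do."""
    if today <= connect_date + timedelta(days=30):
        return {"api-management", "arm"}
    return {"arm"}

print(valid_token_issuers(date(2022, 1, 1), date(2022, 1, 15)))  # both issuers
print(valid_token_issuers(date(2022, 1, 1), date(2022, 3, 1)))   # ARM only
```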
## Get started ### Browse to [Video Analyzer for Media portal](https://aka.ms/vi-portal-link)
-1. Sign in using your AAD account.
+1. Sign in using your Azure AD account.
1. On the top right bar press *User account* to open the side pane account list.
-1. Select the Video Analyzer for Media classic account you wish to connect to ARM (Classic accounts will be tagged with a *classic tag*).
+1. Select the Video Analyzer for Media classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*).
1. Click **Settings**. ![account-settings](media/connect-classic-account-to-arm/user-account-settings.png)
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
First let's create an empty ASP.NET Core app.
```bash dotnet new web
- dotnet add package Azure.Messaging.WebPubSub
+ dotnet add package Microsoft.Azure.WebPubSub.AspNetCore --version 1.0.0-beta.3
``` 2. Then add `app.UseStaticFiles();` before `app.UseRouting();` in `Startup.cs` to support static files. Remove the default `endpoints.MapGet` inside `app.UseEndpoints`.
First let's create an empty ASP.NET Core app.
} app.UseStaticFiles();- app.UseRouting(); app.UseEndpoints(endpoints =>
You can test the server by running `dotnet run --urls http://localhost:8080` and
You may remember that in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privilege to do any operation against the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so the client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
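Conceptually, the client access token the SDK generates is a JWT signed with the connection string's `AccessKey`, which is exactly why the key must stay on the server and clients go through `/negotiate`. A stdlib-only sketch of that idea (not the real SDK implementation; claim names and endpoint handling are simplified assumptions):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def parse_connection_string(conn: str) -> dict:
    # "Endpoint=...;AccessKey=...;Version=..." -> dict of parts
    return dict(part.split("=", 1) for part in conn.split(";") if part)

def client_access_token(conn: str, hub: str, user_id: str, ttl: int = 3600) -> str:
    """Sketch of a Web PubSub-style client token: an HS256 JWT whose
    audience is the client hub endpoint, signed with the AccessKey.
    Because signing needs the key, tokens must be minted server-side."""
    parts = parse_connection_string(conn)
    audience = parts["Endpoint"].replace("https://", "wss://") + f"/client/hubs/{hub}"
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = _b64url(json.dumps(
        {"aud": audience, "sub": user_id, "exp": int(time.time()) + ttl}).encode())
    sig = hmac.new(parts["AccessKey"].encode(),
                   f"{header}.{claims}".encode(), hashlib.sha256).digest()
    return f"{header}.{claims}.{_b64url(sig)}"

token = client_access_token(
    "Endpoint=https://demo.webpubsub.azure.com;AccessKey=not-a-real-key;",
    hub="chat", user_id="alice")
print(len(token.split(".")))  # → 3
```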
-1. Install dependencies
- Install dependencies and use [Secret Manager](/aspnet/core/security/app-secrets#secret-manager) tool for .NET Core to set the connection string. Run the below command, replacing `<connection_string>` with the one fetched in [previous step](#get-the-connectionstring-for-future-use)
+1. Install dependencies.
```bash dotnet add package Microsoft.Extensions.Azure ```
-2. DI the service client inside `ConfigureServices` and don't forget to replace `<connection_string>` with the one of your services.
+2. Add a `SampleChatHub` class to handle hub events, and register the service middleware and service client for dependency injection inside `ConfigureServices()`. Don't forget to replace `<connection_string>` with the connection string of your service.
```csharp public void ConfigureServices(IServiceCollection services) {
- services.AddAzureClients(builder =>
+ services.AddWebPubSub(o => o.ServiceEndpoint = new ServiceEndpoint("<connection_string>"))
+ .AddWebPubSubServiceClient<SampleChatHub>();
+ }
+
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ {
+ if (env.IsDevelopment())
+ {
+ app.UseDeveloperExceptionPage();
+ }
+
+ app.UseStaticFiles();
+ app.UseRouting();
+
+ app.UseEndpoints(endpoints =>
{
- builder.AddWebPubSubServiceClient("<connection_string>", "chat");
}); }+
+ private sealed class SampleChatHub : WebPubSubHub
+ {
+ }
```
-2. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token
+
+ `AddWebPubSubServiceClient<THub>()` injects the service client `WebPubSubServiceClient<THub>`, which we use in the negotiation step to generate client connection tokens and in hub methods to invoke service REST APIs when hub events are triggered.
+
+3. Add a `/negotiate` API to the server inside `app.UseEndpoints` to generate the token.
```csharp app.UseEndpoints(endpoints =>
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
await context.Response.WriteAsync("missing user id"); return; }
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient<SampleChatHub>>();
await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri); }); });
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
You can test this API by running `dotnet run --urls http://localhost:8080` and accessing `http://localhost:8080/negotiate?id=<user-id>`, which gives you the full URL of the Azure Web PubSub service with an access token.
-3. Then update `index.html` to include the following script to get the token from server and connect to service
+4. Then update `index.html` to include the following script to get the token from the server and connect to the service.
```html <html>
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</html> ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
-
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [JavaScript](#tab/javascript)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</html> ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [Java](#tab/java)
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
</html> ```
- If you are using Chrome, you can test it by opening the home page, input your user name. press F12 to open the Developer Tools window, switch to **Console** table and you'll see `connected` being printed in browser console.
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
Events are delivered to server in the form of Webhook. Webhook is served and exp
Azure Web PubSub follows [CloudEvents](./reference-cloud-events.md) to describe the event data. # [C#](#tab/csharp)
-For now, you need to implement the event handler by your own in C#, the steps are straight forward following [the protocol spec](./reference-cloud-events.md) and illustrated below.
+Here we're using the Web PubSub middleware SDK, which already implements parsing and processing of the CloudEvents schema, so we don't need to deal with those details. Instead, we can focus on the business logic in the hub methods.
1. Add event handlers inside `UseEndpoints`. Specify the endpoint path for the events, let's say `/eventhandler`. -
-2. First we'd like to handle the abuse protection OPTIONS requests, we check if the header contains `WebHook-Request-Origin` header, and we return the header `WebHook-Allowed-Origin`. For simplicity for demo purpose, we return `*` to allow all the origins.
```csharp app.UseEndpoints(endpoints => {
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
- {
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
+ endpoints.MapWebPubSubHub<SampleChatHub>("/eventhandler/{*path}");
}); ```
-3. Then we'd like to check if the incoming requests are the events we expect. Let's say we now care about the system `connected` event, which should contain the header `ce-type` as `azure.webpubsub.sys.connected`. We add the logic after abuse protection:
+2. Go to the `SampleChatHub` class we created in the previous step. Add a constructor that accepts `WebPubSubServiceClient<SampleChatHub>` so we can use it to invoke the service, and override the `OnConnectedAsync()` method to respond when the `connected` event is triggered.
```csharp
- app.UseEndpoints(endpoints =>
+ private sealed class SampleChatHub : WebPubSubHub
{
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
+ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+
+ public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
{
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
+ _serviceClient = serviceClient;
+ }
+
+ public override async Task OnConnectedAsync(ConnectedEventRequest request)
+ {
+ await _serviceClient.SendToAllAsync($"[SYSTEM] {request.ConnectionContext.UserId} joined.");
+ }
+ }
```
-In the above code, we simply print a message to console when a client is connected. You can see we use `context.Request.Headers["ce-userId"]` so we can see the identity of the connected client.
+In the above code, we use the service client to broadcast a notification message to everyone who has joined.
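The dispatch the middleware performs on the service's CloudEvents headers (shown explicitly in the removed hand-written handler above, which checked `ce-type` and `ce-userId`) can be sketched as:

```python
def classify_event(headers: dict) -> str:
    """Dispatch on the CloudEvents `ce-type` header, as the removed
    hand-written handler did and the middleware now does for us."""
    ce_type = headers.get("ce-type", "")
    if ce_type == "azure.webpubsub.sys.connected":
        return f"connected: {headers.get('ce-userId', '?')}"
    if ce_type == "azure.webpubsub.user.message":
        return f"message from {headers.get('ce-userId', '?')}"
    return "unhandled"

print(classify_event(
    {"ce-type": "azure.webpubsub.sys.connected", "ce-userId": "alice"}))
# → connected: alice
```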
# [JavaScript](#tab/javascript)
Besides system events like `connected` or `disconnected`, client can also send m
# [C#](#tab/csharp)
-The `ce-type` of `message` event is always `azure.webpubsub.user.message`, details see [Event message](./reference-cloud-events.md#message).
+Implement the `OnMessageReceivedAsync()` method in `SampleChatHub`.
-1. Handle message event
+1. Handle message event.
```csharp
- app.UseEndpoints(endpoints =>
+ private sealed class SampleChatHub : WebPubSubHub
{
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
+ private readonly WebPubSubServiceClient<SampleChatHub> _serviceClient;
+
+ public SampleChatHub(WebPubSubServiceClient<SampleChatHub> serviceClient)
{
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- context.Response.StatusCode = 200;
- return;
- }
- else if (context.Request.Headers["ce-type"] == "azure.webpubsub.user.message")
- {
- using var stream = new StreamReader(context.Request.Body);
- await serviceClient.SendToAllAsync($"[{userId}] {await stream.ReadToEndAsync()}");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
+ _serviceClient = serviceClient;
+ }
+
+ public override async Task OnConnectedAsync(ConnectedEventRequest request)
+ {
+ await _serviceClient.SendToAllAsync($"[SYSTEM] {request.ConnectionContext.UserId} joined.");
+ }
+
+ public override async ValueTask<UserEventResponse> OnMessageReceivedAsync(UserEventRequest request, CancellationToken cancellationToken)
+ {
+ await _serviceClient.SendToAllAsync($"[{request.ConnectionContext.UserId}] {request.Data}");
+
+        return request.CreateResponse("[SYSTEM] ack.");
+ }
+ }
```
- This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients.
+ This event handler uses `WebPubSubServiceClient.SendToAllAsync()` to broadcast the received message to all clients. Note that at the end we return a `UserEventResponse`, which sends a message directly to the caller and marks the webhook request as successful. If you have extra validation logic and want to reject the call, you can throw an exception here; the middleware delivers the exception message to the service, and the service drops the current client connection.
2. Update `index.html` to add the logic to send messages from the user to the server and to display received messages in the page.
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
In the code above, `WebSocket.send()` sends a message to the service, and `WebSocket.onmessage` listens for messages from the service.
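As a sketch of that client wiring (plain JavaScript; `createChatClient` and the `appendLine` callback are hypothetical names standing in for the page's DOM logic):

```javascript
// Minimal chat-client wiring sketch. The server broadcasts lines that are
// already formatted as "[user] text", so the client only displays event.data.
function createChatClient(ws, appendLine) {
  ws.onmessage = (event) => appendLine(event.data); // show each broadcast
  return {
    send: (text) => ws.send(text), // raw text; the server adds the prefix
  };
}
```

In the browser, `ws` would be `new WebSocket(url)` obtained from the negotiate endpoint, and `appendLine` would append a row to the message list.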
-3. Finally update the `onConnected` handler to broadcast the connected event to all clients so they can see who joined the chat room.
-
- ```csharp
- app.UseEndpoints(endpoints =>
- {
- var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
- // abuse protection
- endpoints.Map("/eventhandler/{*path}", async context =>
- {
- if (context.Request.Method == "OPTIONS")
- {
- if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
- {
- context.Response.Headers["WebHook-Allowed-Origin"] = "*";
- context.Response.StatusCode = 200;
- return;
- }
- }
- else if (context.Request.Method == "POST")
- {
- // get the userId from header
- var userId = context.Request.Headers["ce-userId"];
- if (context.Request.Headers["ce-type"] == "azure.webpubsub.sys.connected")
- {
- // the connected event
- Console.WriteLine($"{userId} connected");
- await serviceClient.SendToAllAsync($"[SYSTEM] {userId} joined.");
- context.Response.StatusCode = 200;
- return;
- }
- }
- });
- });
- ```
Now run the server using `dotnet run --urls http://localhost:8080` and open multiple browser instances to access http://localhost:8080/index.html. You can then chat with each other. The complete code sample of this tutorial can be found [here][code-csharp].
backup Quick Backup Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-portal.md
Title: Quickstart - Back up a VM with the Azure portal description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and back up the VM with the Azure portal. Previously updated : 05/12/2020 Last updated : 01/11/2022 ms.devlang: azurecli +++ # Back up a virtual machine in Azure
This quickstart enables backup on an existing Azure VM. If you need to create a
Sign in to the [Azure portal](https://portal.azure.com). ++
+## Apply a backup policy
+
+To apply a backup policy to your Azure VMs, follow these steps:
+
+1. Go to **Backup center** and click **+Backup** from the **Overview** tab.
+
+ ![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
+
+1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
+
+ ![Screenshot showing Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
+
+1. Assign a Backup policy.
+
+ - The default policy backs up the VM once a day. The daily backups are retained for _30 days_. Instant recovery snapshots are retained for two days.
+
+ ![Screenshot showing the default backup policy.](./media/backup-azure-arm-vms-prepare/default-policy.png)
+
+ - If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.
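As a toy illustration of the default policy's retention windows described above (daily recovery points kept for 30 days, instant recovery snapshots kept for two days), the expiry dates for a given backup time work out as follows (`defaultPolicyExpiry` is a made-up helper, not an Azure API):

```javascript
// Toy illustration of the default backup policy's retention windows:
// daily backups kept 30 days, instant-restore snapshots kept 2 days.
const DAY_MS = 24 * 60 * 60 * 1000;

function defaultPolicyExpiry(backupTime, { retainDays = 30, snapshotDays = 2 } = {}) {
  return {
    snapshotExpiry: new Date(backupTime.getTime() + snapshotDays * DAY_MS),
    recoveryPointExpiry: new Date(backupTime.getTime() + retainDays * DAY_MS),
  };
}
```

For a backup taken on January 1, the instant-restore snapshot expires on January 3 and the vault recovery point on January 31.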
++ ## Select a VM to back up Create a simple scheduled daily backup to a Recovery Services vault.
-1. In the menu on the left, select **Virtual machines**.
-2. From the list, choose a VM to back up. If you used the sample VM quickstart commands, the VM is named *myVM* in the *myResourceGroup* resource group.
-3. In the **Operations** section, choose **Backup**. The **Enable backup** window opens.
+1. Under **Virtual Machines**, select **Add**.
+
+ ![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
+
+1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
+
+ * The selected VMs are validated.
+ * You can only select VMs in the same region as the vault.
+ * VMs can only be backed up in a single vault.
+
+ ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
+
+ >[!NOTE]
+ > All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in a soft-deleted state, it won't be visible in this list. If you need to re-protect the VM, wait for the soft-delete period to expire, or undelete the VM from the soft-deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
## Enable backup on a VM A Recovery Services vault is a logical container that stores the backup data for each protected resource, such as Azure VMs. When the backup job for a protected resource runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
-1. Select **Create new** and provide a name for the new vault, such as *myRecoveryServicesVault*.
-2. If not already selected, choose **Use existing**, then select the resource group of your VM from the drop-down menu.
+To enable VM backup, in **Backup**, select **Enable backup**. This deploys the policy to the vault and to the VMs, and installs the backup extension on the VM agent running on the Azure VM.
- ![Enable VM backup in the Azure portal](./media/quick-backup-vm-portal/enable-backup.png)
+After enabling backup:
- By default, the vault is set for Geo-Redundant storage. To further protect your data, this storage redundancy level ensures that your backup data is replicated to a secondary Azure region that's hundreds of miles away from the primary region.
+- The Backup service installs the backup extension whether or not the VM is running.
+- An initial backup will run in accordance with your backup schedule.
+- When backups run, note that:
+ - A VM that's running has the greatest chance for capturing an application-consistent recovery point.
+ - However, even if the VM is turned off, it's backed up. Such a VM is known as an offline VM. In this case, the recovery point will be crash-consistent.
+- Explicit outbound connectivity isn't required to allow backup of Azure VMs.
- You create and use policies to define when a backup job runs and how long the recovery points are stored. The default protection policy runs a backup job each day and retains recovery points for 30 days. You can use these default policy values to quickly protect your VM.
+### Create a custom policy
-3. To accept the default backup policy values, select **Enable Backup**.
+If you selected to create a new backup policy, fill in the policy settings.
-It takes a few moments to create the Recovery Services vault.
+1. In **Policy name**, specify a meaningful name.
+2. In **Backup schedule**, specify when backups should be taken. You can take daily or weekly backups for Azure VMs.
+3. In **Instant Restore**, specify how long you want to retain snapshots locally for instant restore.
+ * When you restore, backed up VM disks are copied from storage, across the network to the recovery storage location. With instant restore, you can leverage locally stored snapshots taken during a backup job, without waiting for backup data to be transferred to the vault.
+ * You can retain snapshots for instant restore for between one and five days. The default value is two days.
+4. In **Retention range**, specify how long you want to keep your daily or weekly backup points.
+5. In **Retention of monthly backup point** and **Retention of yearly backup point**, specify whether you want to keep a monthly or yearly backup of your daily or weekly backups.
+6. Select **OK** to save the policy.
+ > [!NOTE]
+ > To store the restore point collection (RPC), the Backup service creates a separate resource group (RG). This RG is different than RG of the VM. [Learn more](backup-during-vm-creation.md#azure-backup-resource-group-for-virtual-machines).
-## Start a backup job
+ ![Screenshot showing the new backup policy.](./media/backup-azure-arm-vms-prepare/new-policy.png)
-You can start a backup now rather than wait for the default policy to run the job at the scheduled time. This first backup job creates a full recovery point. Each backup job after this initial backup creates incremental recovery points. Incremental recovery points are storage and time-efficient, as they only transfer changes made since the last backup.
+> [!NOTE]
+> Azure Backup doesn't support automatic clock adjustment for daylight-saving changes for Azure VM backups. As time changes occur, modify backup policies manually as required.
-1. On the **Backup** window for your VM, select **Backup now**.
+## Start a backup job
- ![Perform immediate VM backup in the Azure portal](./media/quick-backup-vm-portal/backup-now.png)
+The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
-2. To accept the backup retention policy of 30 days, leave the default **Retain Backup Till** date. To start the job, select **Backup**.
+1. Go to **Backup center** and select the **Backup Instances** menu item.
+1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup.
+1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**.
+1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
+1. Monitor the portal notifications.
+ To monitor the job progress, go to **Backup center** > **Backup Jobs** and filter the list for **In progress** jobs.
+ Depending on the size of your VM, creating the initial backup may take a while.
## Monitor the backup job
-In the **Backup** window for your VM, the status of the backup and number of completed restore points are shown. Once the VM backup job is complete, information on the **Last backup time**, **Latest restore point**, and **Oldest restore point** is shown on the right-hand side of the **Overview** window.
+The Backup job details for each VM backup consist of two phases, the **Snapshot** phase followed by the **Transfer data to vault** phase.
+
+The snapshot phase guarantees the availability of a recovery point stored along with the disks for **Instant Restore**. Snapshots are available for a maximum of five days, depending on the snapshot retention configured by the user. The **Transfer data to vault** phase creates a recovery point in the vault for long-term retention, and it starts only after the snapshot phase is completed.
+
+ ![Screenshot showing the backup job status.](./media/backup-azure-arm-vms-prepare/backup-job-status.png)
+
+There are two **Sub Tasks** running at the back end, one for the front-end backup job; their progress can be checked from the **Backup Job** details pane, as shown below:
+
+ ![Screenshot showing backup job status sub-tasks.](./media/backup-azure-arm-vms-prepare/backup-job-phase.png)
+
+The **Transfer data to vault** phase can take multiple days to complete depending on the size of the disks, churn per disk and several other factors.
+
+Job status can vary depending on the following scenarios:
+
+**Snapshot** | **Transfer data to vault** | **Job Status**
+ | |
+Completed | In progress | In progress
+Completed | Skipped | Completed
+Completed | Completed | Completed
+Completed | Failed | Completed with warning
+Failed | Failed | Failed
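The table above is a simple mapping from the two phase outcomes to the overall job status; as a sketch in plain JavaScript (`jobStatus` is an illustrative helper, not an Azure API):

```javascript
// Overall backup job status derived from the two phases, per the table above.
function jobStatus(snapshot, transfer) {
  if (snapshot === 'Failed') return 'Failed';
  if (snapshot === 'Completed') {
    switch (transfer) {
      case 'In progress': return 'In progress';
      case 'Skipped':
      case 'Completed':  return 'Completed';
      case 'Failed':     return 'Completed with warning';
    }
  }
  return 'Unknown'; // combinations not covered by the table
}
```

Note that a failed vault transfer still leaves a usable snapshot, which is why that combination surfaces as a warning rather than a failure.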
+
+Now with this capability, two backups can run in parallel for the same VM, but in either phase (snapshot, transfer data to vault) only one sub-task can run at a time. This decoupling avoids the scenario where a backup job still in progress caused the next day's backup to fail. Subsequent days' backups can complete the snapshot phase, while **Transfer data to vault** is skipped if an earlier day's backup job is still in progress.
+The incremental recovery point created in the vault captures all the churn since the most recent recovery point in the vault. There's no cost impact on the user.
+
+## Optional steps
+
+### Install the VM agent
+
+Azure Backup backs up Azure VMs by installing an extension to the Azure VM agent running on the machine. If your VM was created from an Azure Marketplace image, the agent is installed and running. If you create a custom VM, or you migrate an on-premises machine, you might need to install the agent manually, as summarized in the table.
+
+**VM** | **Details**
+ |
+**Windows** | 1. [Download and install](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409) the agent MSI file. <br><br> 2. Install with admin permissions on the machine. <br><br> 3. Verify the installation. In *C:\WindowsAzure\Packages* on the VM, right-click **WaAppAgent.exe** > **Properties**. On the **Details** tab, **Product Version** should be 2.6.1198.718 or higher. <br><br> If you're updating the agent, make sure that no backup operations are running, and [reinstall the agent](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409).
+**Linux** | Install by using an RPM or a DEB package from your distribution's package repository. This is the preferred method for installing and upgrading the Azure Linux agent. All the [endorsed distribution providers](../virtual-machines/linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories. The agent is available on [GitHub](https://github.com/Azure/WALinuxAgent), but we don't recommend installing from there. <br><br> If you're updating the agent, make sure no backup operations are running, and update the binaries.
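The Windows verification step compares dotted version strings (for example, **Product Version** 2.6.1198.718 or higher) numerically, segment by segment. A small helper showing that comparison (illustrative only, not part of the agent):

```javascript
// Compare dotted version strings numerically, segment by segment,
// e.g. for the WaAppAgent.exe "2.6.1198.718 or higher" check above.
function versionAtLeast(version, minimum) {
  const a = version.split('.').map(Number);
  const b = minimum.split('.').map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0, y = b[i] || 0; // missing segments count as 0
    if (x !== y) return x > y;
  }
  return true; // equal versions satisfy "or higher"
}
```

A plain string comparison would get this wrong (`"2.10" < "2.6"` lexicographically), which is why the segments are compared as numbers.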
## Clean up deployment
If you're going to continue on to a Backup tutorial that explains how to restore
2. Choose **Stop backup**.
- ![Stop VM backup from the Azure portal](./media/quick-backup-vm-portal/stop-backup.png)
+ ![Screenshot showing to stop VM backup from the Azure portal.](./media/quick-backup-vm-portal/stop-backup.png)
3. Select **Delete Backup Data** from the drop-down menu.
If you're going to continue on to a Backup tutorial that explains how to restore
6. From the list, choose your resource group. If you used the sample VM quickstart commands, the resource group is named *myResourceGroup*. 7. Select **Delete resource group**. To confirm, enter the resource group name, then select **Delete**.
- ![Delete the resource group from the Azure portal](./media/quick-backup-vm-portal/delete-resource-group-from-portal.png)
+ ![Screenshot showing to delete the resource group from the Azure portal.](./media/quick-backup-vm-portal/delete-resource-group-from-portal.png)
## Next steps
backup Tutorial Backup Vm At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-backup-vm-at-scale.md
Title: Tutorial - Back up multiple Azure virtual machines description: In this tutorial, learn how to create a Recovery Services vault, define a backup policy, and simultaneously back up multiple virtual machines. Previously updated : 07/26/2020 Last updated : 01/11/2022 +++ # Use Azure portal to back up multiple virtual machines
When you back up data in Azure, you store that data in an Azure resource called
Sign in to the [Azure portal](https://portal.azure.com/).
-## Create a Recovery Services vault
-The Recovery Services vault contains the backup data, and the backup policy applied to the protected virtual machines. Backing up virtual machines is a local process. You can't back up a virtual machine from one location to a Recovery Services vault in another location. So, for each Azure location that has virtual machines to be backed up, at least one Recovery Services vault must exist in that location.
-
-1. On the left menu, select **All services**.
-
- ![Select All services](./media/tutorial-backup-vm-at-scale/click-all-services.png)
-
-1. In the **All services** dialog box, enter *Recovery Services*. The list of resources filters according to your input. In the list of resources, select **Recovery Services vaults**.
-
- ![Enter and choose Recovery Services vaults](./media/tutorial-backup-vm-at-scale/all-services.png)
-
- The list of Recovery Services vaults in the subscription appears.
-
-1. On the **Recovery Services vaults** dashboard, select **Add**.
-
- ![Add a Recovery Services vault](./media/tutorial-backup-vm-at-scale/add-button-create-vault.png)
-
-1. In the Recovery Services vault menu,
-
- * Type *myRecoveryServicesVault* in **Name**.
- * The current subscription ID appears in **Subscription**. If you have additional subscriptions, you can choose another subscription for the new vault.
- * For **Resource group**, select **Use existing** and choose *myResourceGroup*. If *myResourceGroup* doesn't exist, select **Create new** and type *myResourceGroup*.
- * From the **Location** drop-down menu, choose *West Europe*.
-
- ![Recovery Services vault values](./media/tutorial-backup-vm-at-scale/review-and-create.png)
-
- A Recovery Services vault must be in the same location as the virtual machines being protected. If you have virtual machines in multiple regions, create a Recovery Services vault in each region. This tutorial creates a Recovery Services vault in *West Europe* because that's where *myVM* (the virtual machine created with the quickstart) was created.
-
-1. When you're ready to create the Recovery Services vault, select **Create**.
-
- ![Create the Recovery Services vault](./media/tutorial-backup-vm-at-scale/click-create-button.png)
-
-1. It can take a while to create the Recovery Services vault. Monitor the status notifications in the **Notifications** area at the upper-right corner of the portal. After your vault is created, it's visible in the list of Recovery Services vaults. If you don't see your vault, select **Refresh**.
-
- ![Refresh the list of backup vaults](./media/tutorial-backup-vm-at-scale/refresh-button.png)
When you create a Recovery Services vault, by default the vault has geo-redundant storage. To provide data resiliency, geo-redundant storage replicates the data multiple times across two Azure regions.
When you create a Recovery Services vault, by default the vault has geo-redundan
After creating the Recovery Services vault, the next step is to configure the vault for the type of data, and to set the backup policy. Backup policy is the schedule for how often and when recovery points are taken. Policy also includes the retention range for the recovery points. For this tutorial, let's assume your business is a sports complex with a hotel, stadium, and restaurants and concessions, and you're protecting the data on the virtual machines. The following steps create a backup policy for the financial data.
-1. From the list of Recovery Services vaults, select **myRecoveryServicesVault** to open its dashboard.
+To set a backup policy to your Azure VMs, follow these steps:
- ![Open Scenario menu](./media/tutorial-backup-vm-at-scale/open-vault-from-list.png)
+1. Go to **Backup center** and click **+Backup** from the **Overview** tab.
-1. On the vault dashboard menu, select **Backup** to open the Backup menu.
+ ![Screenshot showing the Backup button.](./media/backup-azure-arm-vms-prepare/backup-button.png)
-1. On the Backup Goal menu, in the **Where is your workload running** drop-down menu, choose *Azure*. From the **What do you want to backup** drop-down, choose *Virtual machine*, and select **Backup**.
+1. Select **Azure Virtual machines** as the **Datasource type** and select the vault you have created. Then click **Continue**.
- These actions prepare the Recovery Services vault for interacting with a virtual machine. Recovery Services vaults have a default policy that creates a restore point each day, and retains the restore points for 30 days.
+ ![Screenshot showing the Backup and Backup Goal panes.](./media/backup-azure-arm-vms-prepare/select-backup-goal-1.png)
- ![Backup goal](./media/tutorial-backup-vm-at-scale/backup-goal.png)
+1. Assign a Backup policy.
-1. To create a new policy, on the Backup policy menu, from the **Choose backup policy** drop-down menu, select *Create a new policy*.
+ - The default policy backs up the VM once a day. The daily backups are retained for 30 days. Instant recovery snapshots are retained for two days.
- ![Create new policy](./media/tutorial-backup-vm-at-scale/create-new-policy.png)
+ ![Screenshot showing the default backup policy.](./media/backup-azure-arm-vms-prepare/default-policy.png)
-1. The **Backup policy** pane will open. Fill out the following details:
- * For **Policy Name** type *Finance*. Enter the following changes for the Backup policy:
- * For **Backup frequency** set the timezone for *Central Time*. Since the sports complex is in Texas, the owner wants the timing to be local. Leave the backup frequency set to Daily at 3:30AM.
- * For **Retention of daily backup point**, set the period to 90 days.
- * For **Retention of weekly backup point**, use the *Monday* restore point and retain it for 52 weeks.
- * For **Retention of monthly backup point**, use the restore point from First Sunday of the month, and retain it for 36 months.
- * Deselect the **Retention of yearly backup point** option. The leader of Finance doesn't want to keep data longer than 36 months.
- * Select **OK** to create the backup policy.
+ - If you don't want to use the default policy, select **Create New**, and create a custom policy as described in the next procedure.
- ![Backup policy settings](./media/tutorial-backup-vm-at-scale/set-new-policy.png)
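The *Finance* policy settings described in this tutorial amount to a small configuration object. A sketch (the field names are invented for illustration, not an Azure API shape):

```javascript
// Illustrative representation of the "Finance" backup policy above.
// Field names are made up for this sketch; they are not an Azure API shape.
const financePolicy = {
  name: 'Finance',
  schedule: { frequency: 'Daily', time: '3:30 AM', timeZone: 'Central Time' },
  retention: {
    dailyDays: 90,                              // daily points kept 90 days
    weekly: { day: 'Monday', weeks: 52 },       // Monday point kept 52 weeks
    monthly: { day: 'First Sunday', months: 36 },
    yearly: null,                               // deselected: nothing beyond 36 months
  },
};
```

Writing the settings out this way makes it easy to see the longest-lived data: the monthly point from the first Sunday, retained for 36 months.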
+1. Under **Virtual Machines**, select **Add**.
- After creating the backup policy, associate the policy with the virtual machines.
+ ![Screenshot showing to add virtual machines.](./media/backup-azure-arm-vms-prepare/add-virtual-machines.png)
-1. Under **Virtual Machines**, select **Add**.
+1. The **Select virtual machines** pane will open. Select the VMs you want to back up using the policy. Then select **OK**.
- ![Add virtual machines](./media/tutorial-backup-vm-at-scale/add-virtual-machines.png)
+ * The selected VMs are validated.
+ * You can only select VMs in the same region as the vault.
+ * VMs can only be backed up in a single vault.
-1. The **Select virtual machines** pane will open. Select *myVM* and select **OK** to deploy the backup policy to the virtual machines.
+ ![Screenshot showing the Select virtual machines pane.](./media/backup-azure-arm-vms-prepare/select-vms-to-backup.png)
- All virtual machines that are in the same location, and aren't already associated with a backup policy, appear. *myVMH1* and *myVMR1* are selected to be associated with the *Finance* policy.
+ >[!NOTE]
+ > All the VMs in the same region and subscription as that of the vault are available to configure backup. When configuring backup, you can browse to the virtual machine name and its resource group, even though you don't have the required permission on those VMs. If your VM is in a soft-deleted state, it won't be visible in this list. If you need to re-protect the VM, wait for the soft-delete period to expire, or undelete the VM from the soft-deleted list. For more information, see [the soft delete for VMs article](soft-delete-virtual-machines.md#soft-delete-for-vms-using-azure-portal).
- ![Choose VMs to protect](./media/tutorial-backup-vm-at-scale/choose-vm-to-protect.png)
+1. In **Backup**, select **Enable backup**. This deploys the policy to the vault and to the VMs, and installs the backup extension on the VM agent running on the Azure VM.
-1. After the virtual machines have been chosen, select **Enable Backup**.
+After enabling backup:
- When the deployment completes, you'll receive a notification that deployment successfully completed.
+* The Backup service installs the backup extension whether or not the VM is running.
+* An initial backup will run in accordance with your backup schedule.
+* When backups run, note that:
+ * A VM that's running has the greatest chance for capturing an application-consistent recovery point.
+ * However, even if the VM is turned off, it's backed up. Such a VM is known as an offline VM. In this case, the recovery point will be crash-consistent.
+* Explicit outbound connectivity isn't required to allow backup of Azure VMs.
## Initial backup You've enabled backup for the Recovery Services vaults, but an initial backup hasn't been created. It's a disaster recovery best practice to trigger the first backup, so that your data is protected.
-To run an on-demand backup job:
-
-1. On the vault dashboard, select **3** under **Backup Items**, to open the Backup Items menu.
-
- ![Backup items](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
-
- The **Backup Items** menu opens.
-
-1. On the **Backup Items** menu, select **Azure Virtual Machine** to open the list of virtual machines associated with the vault.
-
- ![List of virtual machines](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
-
-1. The **Backup Items** list opens.
-
- ![Backup job triggered](./media/tutorial-backup-vm-at-scale/initial-backup-context-menu.png)
-
-1. On the **Backup Items** list, select the ellipses **...** to open the Context menu.
-
-1. On the Context menu, select **Backup now**.
-
- ![Context menu - select Backup now](./media/tutorial-backup-vm-at-scale/context-menu.png)
-
- The Backup Now menu opens.
-
-1. On the Backup Now menu, enter the last day to retain the recovery point, and select **OK**.
-
- ![Set the last day the Backup Now recovery point is retained](./media/tutorial-backup-vm-at-scale/backup-now-short.png)
-
- Deployment notifications let you know the backup job has been triggered, and that you can monitor the progress of the job on the Backup jobs page. Depending on the size of your virtual machine, creating the initial backup may take a while.
-
- When the initial backup job completes, you can see its status in the Backup job menu. The on-demand backup job created the initial restore point for *myVM*. If you want to back up other virtual machines, repeat these steps for each virtual machine.
+The initial backup will run in accordance with the schedule, but you can run it immediately as follows:
- ![Backup Jobs tile](./media/tutorial-backup-vm-at-scale/initial-backup-complete.png)
+1. Go to **Backup center** and select the **Backup Instances** menu item.
+1. Select **Azure Virtual machines** as the **Datasource type**. Then search for the VM that you have configured for backup.
+1. Right-click the relevant row or select the more icon (…), and then click **Backup Now**.
+1. In **Backup Now**, use the calendar control to select the last day that the recovery point should be retained. Then select **OK**.
+1. Monitor the portal notifications.
+ To monitor the job progress, go to **Backup center** > **Backup Jobs** and filter the list for **In progress** jobs.
+ Depending on the size of your VM, creating the initial backup may take a while.
## Clean up resources
If you plan to continue on to work with subsequent tutorials, don't clean up the
1. On the **myRecoveryServicesVault** dashboard, select **3** under **Backup Items** to open the Backup Items menu.
- ![Open Backup items menu](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
+ ![Screenshot showing to open Backup items menu.](./media/tutorial-backup-vm-at-scale/tutorial-vm-back-up-now.png)
1. On the **Backup Items** menu, select **Azure Virtual Machine** to open the list of virtual machines associated with the vault.
- ![List of virtual machines](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
+ ![Screenshot showing the list of virtual machines.](./media/tutorial-backup-vm-at-scale/three-virtual-machines.png)
The **Backup Items** list opens. 1. In the **Backup Items** menu, select the ellipsis to open the Context menu.
- ![In Backup Items menu, open Context menu](./media/tutorial-backup-vm-at-scale/context-menu-to-delete-vm.png)
+ ![Screenshot showing to open Context menu in Backup Items menu.](./media/tutorial-backup-vm-at-scale/context-menu-to-delete-vm.png)
1. On the context menu, select **Stop backup** to open Stop Backup menu.
- ![Stop backup menu](./media/tutorial-backup-vm-at-scale/context-menu-for-delete.png)
+ ![Screenshot showing the Stop backup menu.](./media/tutorial-backup-vm-at-scale/context-menu-for-delete.png)
1. In the **Stop Backup** menu, select the upper drop-down menu and choose **Delete Backup Data**.
If you plan to continue on to work with subsequent tutorials, don't clean up the
1. Once the backup item is verified (a check mark appears), **Stop backup** button is enabled. Select **Stop Backup** to stop the policy and delete the restore points.
- ![Select Stop backup to delete vault](./media/tutorial-backup-vm-at-scale/provide-reason-for-delete.png)
+ ![Screenshot showing to select Stop backup to delete vault.](./media/tutorial-backup-vm-at-scale/provide-reason-for-delete.png)
>[!NOTE] >Deleted items are retained in the soft delete state for 14 days. Only after that period can the vault be deleted. For more information, see [Delete an Azure Backup Recovery Services vault](backup-azure-delete-vault.md). 1. When there are no more items in the vault, select **Delete**.
- ![Select delete](./media/tutorial-backup-vm-at-scale/deleting-the-vault.png)
+ ![Screenshot showing to select delete.](./media/tutorial-backup-vm-at-scale/deleting-the-vault.png)
Once the vault is deleted, you'll return to the list of Recovery Services vaults.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
description: This article lists the Microsoft Security Response Center updates a
documentationcenter: na-+ editor: '' ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 na Previously updated : 11/19/2021- Last updated : 1/11/2022+ # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## December 2021 Guest OS
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 21-12 | [5008218] | Latest Cumulative Update (LCU) | [6.38] | Dec 14, 2021 |
+| Rel 21-12 | [5006671] | IE Cumulative Updates | [2.117], [3.104], [4.97] | Dec 14, 2021 |
+| Rel 21-12 | [5008223] | Latest Cumulative Update (LCU) | [7.6] | Dec 14, 2021 |
+| Rel 21-12 | [5008207] | Latest Cumulative Update (LCU) | [5.62] | Dec 14, 2021 |
+| Rel 21-12 | [4578952] | .NET Framework 3.5 Security and Quality Rollup | [2.117] | Feb 16, 2021 |
+| Rel 21-12 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup | [2.117] | Jun 8, 2021 |
+| Rel 21-12 | [4578953] | .NET Framework 3.5 Security and Quality Rollup | [4.97] | Feb 16, 2021 |
+| Rel 21-12 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup | [4.97] | Feb 16, 2021 |
+| Rel 21-12 | [4578950] | .NET Framework 3.5 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.104] | Feb 16, 2021 |
+| Rel 21-12 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [5008244] | Monthly Rollup | [2.117] | Sep 14, 2021 |
+| Rel 21-12 | [5008277] | Monthly Rollup | [3.104] | Sep 14, 2021 |
+| Rel 21-12 | [5008263] | Monthly Rollup | [4.97] | Sep 14, 2021 |
+| Rel 21-12 | [5001401] | Servicing Stack update | [3.104] | Apr 13, 2021 |
+| Rel 21-12 | [5001403] | Servicing Stack update | [4.97] | Apr 13, 2021 |
+| Rel 21-12 OOB | [4578013] | Standalone Security Update | [4.97] | Aug 19, 2020 |
+| Rel 21-12 | [5005698] | Servicing Stack update | [5.62] | Sep 14, 2021 |
+| Rel 21-12 | [5006749] | Servicing Stack update | [2.117] | July 13, 2021 |
+| Rel 21-12 | [5008287] | Servicing Stack update | [6.38] | Aug 10, 2021 |
+| Rel 21-12 | [4494175] | Microcode | [5.62] | Sep 1, 2020 |
+| Rel 21-12 | [4494174] | Microcode | [6.38] | Sep 1, 2020 |
+
+[5008218]: https://support.microsoft.com/kb/5008218
+[5006671]: https://support.microsoft.com/kb/5006671
+[5008223]: https://support.microsoft.com/kb/5008223
+[5008207]: https://support.microsoft.com/kb/5008207
+[4578952]: https://support.microsoft.com/kb/4578952
+[4578955]: https://support.microsoft.com/kb/4578955
+[4578953]: https://support.microsoft.com/kb/4578953
+[4578956]: https://support.microsoft.com/kb/4578956
+[4578950]: https://support.microsoft.com/kb/4578950
+[4578954]: https://support.microsoft.com/kb/4578954
+[5004335]: https://support.microsoft.com/kb/5004335
+[5008244]: https://support.microsoft.com/kb/5008244
+[5008277]: https://support.microsoft.com/kb/5008277
+[5008263]: https://support.microsoft.com/kb/5008263
+[5001401]: https://support.microsoft.com/kb/5001401
+[5001403]: https://support.microsoft.com/kb/5001403
+[4578013]: https://support.microsoft.com/kb/4578013
+[5005698]: https://support.microsoft.com/kb/5005698
+[5006749]: https://support.microsoft.com/kb/5006749
+[5008287]: https://support.microsoft.com/kb/5008287
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+[2.117]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.104]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.97]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.62]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.38]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.6]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## November 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
The following tables show the Microsoft Security Response Center (MSRC) updates
## October 2021 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 21-10 | [5006672] | Latest Cumulative Update(LCU) | 6.36 | Oct 12 , 2021 |
-| Rel 21-10 | [5006671] | IE Cumulative Updates | 2.115, 3.102, 4.95 | Oct 12 , 2021 |
+| Rel 21-10 | [5006672] | Latest Cumulative Update (LCU) | [6.36] | Oct 12, 2021 |
+| Rel 21-10 | [5006671] | IE Cumulative Updates | [2.115], [3.102], [4.95] | Oct 12, 2021 |
| Rel 21-10 | [5006699] | IE Cumulative Updates | 7.00 | Oct 12, 2021 |
-| Rel 21-10 | [5006669] | Latest Cumulative Update(LCU) | 5.60 | Oct 12 , 2021 |
-| Rel 21-10 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | 2.115 | Feb 16, 2021 |
-| Rel 21-10 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | 2.115 | Jun 8, 2021 |
-| Rel 21-10 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | 4.95 | Feb 16, 2021 |
-| Rel 21-10 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | 4.95 | Feb 16, 2021 |
-| Rel 21-10 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | 3.102 | Feb 16, 2021 |
-| Rel 21-10 | [4578954] | . NET Framework 4.5.2 Security and Quality Rollup  | 3.102 | Feb 16, 2021 |
-| Rel 21-10 | [5004335] | . NET Framework 3.5 and 4.7.2 Cumulative Update  | 6.36 | Aug 10, 2021 |
-| Rel 21-10 | [5006743] | Monthly Rollup  | 2.115 | Sep 14, 2021 |
-| Rel 21-10 | [5006739] | Monthly Rollup  | 3.102 | Sep 14, 2021 |
-| Rel 21-10 | [5006714] | Monthly Rollup  | 4.95 | Sep 14, 2021 |
-| Rel 21-10 | [5001401] | Servicing Stack update  | 3.102 | Apr 13, 2021 |
-| Rel 21-10 | [5001403] | Servicing Stack update  | 4.95 | Apr 13, 2021 |
-| Rel 21-10 OOB | [4578013] | Standalone Security Update  | 4.95 | Aug 19, 2020 |
-| Rel 21-10 | [5005698] | Servicing Stack update  | 5.60 | Sep 14, 2021 |
-| Rel 21-10 | [5006749] | Servicing Stack update  | 2.115 | July 13, 2021 |
-| Rel 21-10 | [4494175] | Microcode  | 5.60 | Sep 1, 2020 |
-| Rel 21-10 | [4494174] | Microcode  | 6.36 | Sep 1, 2020 |
+| Rel 21-10 | [5006669] | Latest Cumulative Update (LCU) | [5.60] | Oct 12, 2021 |
+| Rel 21-10 | [4578952] | .NET Framework 3.5 Security and Quality Rollup  | [2.115] | Feb 16, 2021 |
+| Rel 21-10 | [4578955] | .NET Framework 4.5.2 Security and Quality Rollup  | [2.115] | Jun 8, 2021 |
+| Rel 21-10 | [4578953] | .NET Framework 3.5 Security and Quality Rollup  | [4.95] | Feb 16, 2021 |
+| Rel 21-10 | [4578956] | .NET Framework 4.5.2 Security and Quality Rollup  | [4.95] | Feb 16, 2021 |
+| Rel 21-10 | [4578950] | .NET Framework 3.5 Security and Quality Rollup  | [3.102] | Feb 16, 2021 |
+| Rel 21-10 | [4578954] | .NET Framework 4.5.2 Security and Quality Rollup | [3.102] | Feb 16, 2021 |
+| Rel 21-10 | [5004335] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.36] | Aug 10, 2021 |
+| Rel 21-10 | [5006743] | Monthly Rollup  | [2.115] | Sep 14, 2021 |
+| Rel 21-10 | [5006739] | Monthly Rollup  | [3.102] | Sep 14, 2021 |
+| Rel 21-10 | [5006714] | Monthly Rollup  | [4.95] | Sep 14, 2021 |
+| Rel 21-10 | [5001401] | Servicing Stack update  | [3.102] | Apr 13, 2021 |
+| Rel 21-10 | [5001403] | Servicing Stack update  | [4.95] | Apr 13, 2021 |
+| Rel 21-10 OOB | [4578013] | Standalone Security Update  | [4.95] | Aug 19, 2020 |
+| Rel 21-10 | [5005698] | Servicing Stack update  | [5.60] | Sep 14, 2021 |
+| Rel 21-10 | [5006749] | Servicing Stack update  | [2.115] | July 13, 2021 |
+| Rel 21-10 | [4494175] | Microcode  | [5.60] | Sep 1, 2020 |
+| Rel 21-10 | [4494174] | Microcode  | [6.36] | Sep 1, 2020 |
[5006672]: https://support.microsoft.com/kb/5006672 [5006671]: https://support.microsoft.com/kb/5006671
The following tables show the Microsoft Security Response Center (MSRC) updates
[5006749]: https://support.microsoft.com/kb/5006749 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174-
+[2.115]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.102]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.95]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.60]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.36]: ./cloud-services-guestos-update-matrix.md#family-6-releases
## September 2021 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
description: The latest release news and SDK compatibility for Azure Cloud Servi
documentationcenter: na-+ editor: '' ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 na Previously updated : 11/19/2021- Last updated : 1/10/2022+ # Azure Guest OS releases and SDK compatibility matrix Provides you with up-to-date information about the latest Azure Guest OS releases for Cloud Services. This information helps you plan your upgrade path before a Guest OS is disabled. If you configure your roles to use *automatic* Guest OS updates as described in [Azure Guest OS Update Settings][Azure Guest OS Update Settings], it is not vital that you read this page.
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **January 10, 2022**
+The December Guest OS has released.
+ ###### **November 19, 2021** The November Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
-| WA-GUEST-OS-7.5_202111-01 | November 19, 2021 | Post 7.52 |
+| WA-GUEST-OS-7.6_202112-01 | January 10, 2022 | Post 7.8 |
+| WA-GUEST-OS-7.5_202111-01 | November 19, 2021 | Post 7.7 |
## Family 6 releases
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.38_202112-01 | January 10, 2022 | Post 6.40 |
| WA-GUEST-OS-6.37_202111-01 | November 19, 2021 | Post 6.39 |
-| WA-GUEST-OS-6.36_202110-01 | November 1, 2021 | Post 6.38 |
+|~~WA-GUEST-OS-6.36_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-6.35_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-6.34_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-6.33_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.62_202112-01 | January 10, 2022 | Post 5.64 |
| WA-GUEST-OS-5.61_202111-01 | November 19, 2021 | Post 5.63 |
-| WA-GUEST-OS-5.60_202110-01 | November 1, 2021 | Post 5.62 |
+|~~WA-GUEST-OS-5.60_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-5.59_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-5.58_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-5.57_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.97_202112-01 | January 10, 2022 | Post 4.99 |
| WA-GUEST-OS-4.96_202111-01 | November 19, 2021 | Post 4.98 |
-| WA-GUEST-OS-4.95_202110-01 | November 1, 2021 | Post 4.97 |
+|~~WA-GUEST-OS-4.95_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-4.94_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-4.93_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-4.92_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.104_202112-01 | January 10, 2022 | Post 3.106 |
| WA-GUEST-OS-3.103_202111-01 | November 19, 2021 | Post 3.105 |
-| WA-GUEST-OS-3.102_202110-01 | November 1, 2021 | Post 3.104 |
+|~~WA-GUEST-OS-3.102_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-3.101_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-3.100_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-3.99_202107-01~~| August 13, 2021 | October 8, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.117_202112-01 | January 10, 2022 | Post 2.119 |
| WA-GUEST-OS-2.116_202111-01 | November 19, 2021 | Post 2.118 |
-| WA-GUEST-OS-2.115_202110-01 | November 1, 2021 | Post 2.117 |
+|~~WA-GUEST-OS-2.115_202110-01~~| November 1, 2021 | January 10, 2022 |
|~~WA-GUEST-OS-2.114_202109-01~~| October 8, 2021 | November 19, 2021 | |~~WA-GUEST-OS-2.113_202108-01~~| August 27, 2021 | November 1, 2021 | |~~WA-GUEST-OS-2.112_202107-01~~| August 13, 2021 | October 8, 2021 |
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
Title: What is the Anomaly Detector Multivariate API?
+ Title: What is Multivariate Anomaly Detector?
description: Overview of new Anomaly Detector preview multivariate APIs.
Last updated 04/01/2021 + keywords: anomaly detection, machine learning, algorithms
-# Multivariate time series Anomaly Detection (preview)
+# What is Multivariate Anomaly Detector? (Public Preview)
The new **multivariate anomaly detection** APIs enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are automatically accounted for as key factors. This new capability helps you proactively protect your complex systems, such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
To run the Notebook, you should get a valid Anomaly Detector API **subscription
## Region support
-The preview of Multivariate Anomaly Detector is currently available in 26 Azure regions: Southeast Asia, Australia East, Canada Central, North Europe, West Europe, East US, East US 2, South Central US, West US, West US 2, West US 3, UK South, Japan East, Brazil South, Central US, Norway East, Korea Central, East Asia, Germany West Central, Central India, UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West.
+The preview of Multivariate Anomaly Detector is currently available in 26 Azure regions.
+
+| Geography | Regions |
+| - | - |
+| Africa | South Africa North |
+| Asia Pacific | Southeast Asia, East Asia|
+| Australia | Australia East |
+| Brazil |Brazil South|
+|Canada | Canada Central |
+| Europe | North Europe, West Europe, Switzerland North |
+|France |France Central |
+|Germany| Germany West Central |
+|India| Jio India West, Central India |
+|Japan | Japan East |
+|Korea | Korea Central |
+|Norway | Norway East|
+|United Arab Emirates| UAE North |
+| United Kingdom | UK South |
+| United States | East US, East US 2, South Central US, West US, West US 2, West US 3, Central US, North Central US|
+++ ## Algorithms
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
Title: What is the Anomaly Detector API?
+ Title: What is the Univariate Anomaly Detector?
description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data.
keywords: anomaly detection, machine learning, algorithms
-# What is the Anomaly Detector univariate API?
+# What is Univariate Anomaly Detector?
The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
After signing up:
1. Send a request to the Anomaly Detector API with your data. 1. Process the API response by parsing the returned JSON message. + ## Algorithms * See the following technical blogs for information about the algorithms used:
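The "send a request, then parse the JSON response" steps above can be sketched as follows. Assumptions to verify against the current API reference: the `v1.1` REST path, the `Ocp-Apim-Subscription-Key` header, and the `series`/`granularity` body fields follow the public Anomaly Detector API; the resource name and key are placeholders.

```python
import json

# Endpoint path and resource name are placeholders for your Anomaly Detector resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
PATH = "/anomalydetector/v1.1/timeseries/last/detect"

def build_request(points, granularity="daily"):
    """Build the JSON body: the time series plus its granularity."""
    return {
        "series": [{"timestamp": ts, "value": v} for ts, v in points],
        "granularity": granularity,
    }

# Twelve daily points; the API requires a minimum amount of history before detecting.
points = [(f"2021-12-{day:02d}T00:00:00Z", float(day % 7)) for day in range(1, 13)]
body = build_request(points)

headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder credential
    "Content-Type": "application/json",
}

# The network call itself (requires the third-party `requests` package):
# response = requests.post(ENDPOINT + PATH, headers=headers, data=json.dumps(body))
# response.json() includes fields such as "isAnomaly" and "expectedValue".
print(len(body["series"]), body["granularity"])  # → 12 daily
```

Parsing the response then reduces to reading those JSON fields from `response.json()`.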
cognitive-services Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Configure.md
- Title: Configure Review tool settings - Content Moderator-
-description: Use the Review tool to configure or retrieve your team, tags, connectors, workflows, and credentials for Content Moderator.
------- Previously updated : 10/28/2021-
-#How to set any of the configurations in the review tool.
--
-# Configure the Review tool
---
-The [Review tool](https://contentmoderator.cognitive.microsoft.com) has several important features that you can access through the **Settings** menu on the dashboard.
-
-![Content Moderator Review tool settings menu](images/settings-1.png)
-
-## Manage team and subteams
-
-The **Team** tab lets you manage your team and subteams&mdash;groups of users who can be notified when certain [human reviews](../review-api.md#reviews) are started. You can only have one team (which you create when you sign up with the Review tool), but you can create multiple subteams. The team administrator can invite members, set their permissions, and assign them to different subteams.
-
-![Review tool team settings](images/settings-2-team.png)
-
-Subteams are useful for creating escalation teams or teams dedicated to reviewing specific categories of content. For example, you might send adult content to a separate team for further review.
-
-This section explains how to create subteams and quickly assign reviews on the fly. However, you can use [Workflows](workflows.md) to assign reviews based on specific criteria.
-
-### Create a subteam
-
-Go to the **Subteams** section and click **Add Subteam**. Enter your subteam name in the dialog and click **Save**.
-
-![Subteam Name](images/1-Teams-2.PNG)
-
-#### Invite teammates
-
-You cannot assign someone to a subteam if they are not already a member of the default team, so you need to add reviewers to the default team first. Click **Invite** on the **Team** tab.
-
-![Invite users](images/invite-users.png)
-
-#### Assign teammates to subteam
-
-Click the **Add Member** button to assign members from your default team to one or more subteams. You can only add existing users to a subteam. For adding new users who are not in the review tool, invite them by using the "Invite" button on the Team Settings page.
-
-![Assign subteam members](images/1-Teams-3.PNG)
-
-### Assign reviews to subteams
-
-Once you have created your subteams and assigned members, you can start assigning content [reviews](../review-api.md#reviews) to those subteams. This is done from the **Review** tab of the site.
-To assign content to a subteam, click the ellipsis in the upper-right corner, select **Move to**, and select a subteam.
-
-![Assign image review to subteam](images/3-review-image-subteam-1.png)
-
-### Switch between subteams
-
-If you are a member of more than one subteam, you can switch between those subteams to change which content reviews are displayed for you. In the **Review** tab, select the drop-down menu labeled **Default** and select **Choose Subteam**. You can view the content reviews for different subteams, but only those of which you are a member.
-
-![Switch between subteams](images/3-review-image-subteam-2.png)
-
-## Tags
-
-The **Tags** tab lets you define custom moderation tags in addition to the two default moderation tags&mdash;**isadult** (**a**) and **isracy** (**r**). When you create a custom tag, it becomes available in reviews alongside the default tags. You can change which tags show up in reviews by switching their visibility settings.
-
-![Tags view, including the "Is visible" checkboxes](images/tags-4-disable.png)
-
-### Create custom tags
-
-To create a new tag, you must enter a short code, name, and description in the respective fields.
--- **Short code**: Enter a two-letter code for your tag. Example: **cb**-- **Name**: Enter a short and descriptive tag name in lowercase without spaces. Example: **isbullying**.-- **Description**: (optional) Enter a description of the kind of content that your tag targets. Example: **Depictions or instances of cyber bullying**.-
-Click **Add** to add a tag, and click **Save** when you are finished creating tags.
-
-![Review tool create new tag dialog](images/settings-3-tags.png)
-
-### Delete tags
-
-You can delete custom tags by selecting the trash icon next to their entries on the Tags list, but you cannot delete the default tags.
-
-## Connectors
-
-The **Connectors** tab lets you manage your connectors, which are service-specific plugins that can process content in different ways as part of content [workflows](../review-api.md#workflows).
-
-The default connector when you create a workflow is the Content Moderator connector, which can mark content as **adult** or **racy**, find profanity, and so on. However, you can use other connectors, listed here, as long as you have credentials for their respective services (to use the Face connector, for example, you will need to get a [Face](../../face/overview.md) subscription key).
-
-The [Review tool](./human-in-the-loop.md) includes the following connectors:
--- Emotion-- Face-- PhotoDNA Cloud Service-- Language service-
-### Add a connector
-
-To add a connector (and make it available for use in content [workflows](../review-api.md#workflows)), select the appropriate **Connect** button. In the next dialog, enter your subscription key for that service. When you are done, your new connector should appear at the top of the page.
-
-![Content Moderator Connectors Settings](images/settings-4-connectors.png)
-
-## Workflows
-
-The **workflows** tab lets you manage your [workflows](../review-api.md#workflows). Workflows are cloud-based filters for content, and they work with connectors to sort content in different ways and take appropriate actions. Here, you can define, edit, and test your workflows. See [Define and use workflows](Workflows.md) for guidance on how to do this.
-
-![Content Moderator Workflow Settings](images/settings-5-workflows.png)
-
-## Credentials
-
-The **Credentials** tab provides quick access to your Content Moderator subscription key, which you will need to access any of the moderation services from a REST call or client SDK.
-
-![Content Moderator Credentials](images/settings-6-credentials.png)
-
-### Use external credentials for workflows
-
-The [Review tool](https://contentmoderator.cognitive.microsoft.com) generates a free trial key for Azure Content Moderator services when you sign up, but you can also configure it to use an existing key from your Azure account. This is recommended for large-scale scenarios, as free trial keys have strict usage limits ([Pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/content-moderator/)).
-
-If you have created a [Content Moderator resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator) in Azure, navigate to it in the Azure portal and select the **Keys** blade. Copy one of your keys.
-
-![Content Moderator keys in the Azure portal](images/credentials-azure-portal-keys.PNG)
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com)'s **Credentials** tab, go to the **Workflow Settings** pane, select **Edit**, and paste your key into the **Ocp-Apim-Subscription-Key** field. Now, workflows that call the moderation APIs will use your Azure credential.
-
-> [!NOTE]
-> The other two fields in the **Workflow Settings** pane are for custom term and image lists. See the [Custom terms](../try-terms-list-api.md) or [Custom images](../try-image-list-api.md) guides to learn about these.
-
-### Use your Azure account with the review APIs
-
-To use your Azure key with the review APIs, you need to retrieve your Resource ID. Go to your Content Moderator resource in the Azure portal and select the **Properties** blade. Copy the Resource ID value and paste it into the **Whitelisted Resource Id(s)** field of the Review tool's **Credentials** tab.
-
-![Content Moderator Resource ID in the Azure portal](images/credentials-azure-portal-resourceid.PNG)
-
-If you've entered your subscription key in both places, the trial key that comes with your Review tool account will not be used but will remain available.
-
-## Next steps
-
-Follow the [Review tool quickstart](../quick-start.md) to start using the Review tool in content moderation scenarios.
cognitive-services Review Moderated Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Review-Moderated-Images.md
- Title: Use content reviews through the Review tool - Content Moderator-
-description: Learn how the Review tool allows human moderators to review images in a web portal.
------- Previously updated : 03/15/2019-
-# Customer intent: use reviews with the Review tool.
--
-# Create human reviews
--
-In this guide, you'll learn how to set up [reviews](../review-api.md#reviews) on the Review tool website. Reviews store and display content for human moderators to assess. Moderators can alter the applied tags and apply their own custom tags as appropriate. When a user completes a review, the results are sent to a specified callback endpoint, and the content is removed from the site.
-
-## Prerequisites
--- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.-
-## Image reviews
-
-1. Go to the [Review tool](https://contentmoderator.cognitive.microsoft.com/), select the **Try** tab, and upload some images to review.
-1. Once the uploaded images have finished processing, go to the **Review** tab and select **Image**.
-
- ![Chrome browser showing the review tool with the Review Image option highlighted](images/review-images-1.png)
-
- The images display with any labels that have been assigned by the automatic moderation process. The images you've submitted through the Review tool are not visible to other reviewers.
-
-1. Optionally, move the **Reviews to display** slider (1) to adjust the number of images that are displayed on the screen. Click on the **tagged** or **untagged** buttons (2) to sort the images accordingly. Click on a tag panel (3) to toggle it on or off.
-
- ![Chrome browser showing the Review tool with tagged images for review](images/review-images-2.png)
-
-1. To see more information on an image, click on the ellipsis in the thumbnail and select **View details**. You can assign an image to a subteam with the **Move to** option (see the [teams](./configure.md#manage-team-and-subteams) section to learn more about subteams).
-
- ![An image with the View details option highlighted](images/review-images-3.png)
-
-1. Browse the image moderation information on the details page.
-
 ![An image with moderation details listed in a separate pane](images/review-images-4.png)
-
-1. Once you have reviewed and updated the tag assignments as needed, click **Next** to submit your reviews. After you submit, you have about five seconds to click the **Prev** button to return to the previous screen and review images again. After that, the images are no longer in the Submit queue and the **Prev** button is no longer available.
-
-## Text reviews
-
-Text reviews function similarly to image reviews. Instead of uploading content, you simply write or paste in text (up to 1,024 characters). Then, Content Moderator analyzes the text and applies tags (in addition to other moderation information, such as profanity and personal data). In text reviews, you can toggle the applied tags and/or apply custom tags before submitting the review.
-
-![Screenshot of the review tool showing flagged text in a Chrome browser window](../images/reviewresults_text.png)
-
-## Next steps
-
-In this guide, you learned how to set up and use reviews from the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com). Next, see the [API console guide](../try-review-api-review.md) or the [.NET SDK quickstart](../client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) to learn how to create reviews programmatically.
cognitive-services Workflows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/Workflows.md
- Title: Define and use content workflows through the Review tool - Content Moderator-
-description: You can use the Azure Content Moderator workflow designer to define custom workflows and thresholds based on your content policies.
------- Previously updated : 04/04/2019-
-#Customer intent: use workflows from the Review tool
--
-# Define and use moderation workflows
--
-In this guide, you'll learn how to set up and use [workflows](../review-api.md#workflows) on the [Review tool](https://contentmoderator.cognitive.microsoft.com) website. Workflows are cloud-based customized filters that you can use to handle content more efficiently. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. This guide shows you how to use the Content Moderator connector (which is included by default) to filter content and set up human reviews in a typical moderation scenario.
-
-## Create a new workflow
-
-Go to the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and sign in. On the **Settings** tab, select **Workflows**.
-
-![Workflows setting](images/2-workflows-0.png)
-
-On the next screen, select **Add Workflow**.
-
-![Add a workflow](images/2-workflows-1.png)
-
-### Assign a name and description
-
-Name your workflow, enter a description, and choose whether the workflow will handle images or text.
-
-![Workflow name and description](images/image-workflow-create.PNG)
-
-### Define evaluation criteria
-
-On the next screen, go to the **If** section. In the top dropdown menu, choose **Condition**. This will allow you to configure the condition on which the workflow will take action. If you want to use multiple conditions, choose **Combination** instead.
-
-Next, select a connector. This example uses **Content Moderator**. Depending on the connector you choose, you will get different options for data output. See the [Connectors](./configure.md#connectors) section of the Review tool settings guide to learn how to set up other connectors.
-
-![Select workflow connector](images/image-workflow-connect-to.PNG)
-
-Choose the desired output to use and set the conditions to check it against.
-
-![Define workflow condition](images/image-workflow-condition.PNG)
-
-### Define the action
-
-Go to the **Then** section, where you select an action. The following example creates an image review and assigns a tag. Optionally, you can add an alternative (Else) path and set an action for that as well.
-
-![Define workflow action](images/image-workflow-action.PNG)
-
-### Save the workflow
-
-Note the workflow name; you need the name to start a moderation job with the Workflow API (see below). Finally, save the workflow using the **Save** button at the top of the page.
-
-## Test the workflow
-
-Now that you have defined a custom workflow, test it with sample content. Go to **Workflows** and select the corresponding **Execute Workflow** button.
-
-![Workflow test](images/image-workflow-execute.PNG)
-
-Save this [sample image](https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg) to your local drive. Then select **Choose File(s)** and upload the image to the workflow.
-
-![A runner with a quote superimposed on the image](images/sample-text.jpg)
-
-### Track progress
-
-You can view the progress of the workflow in the next popup window.
-
-![Track workflow execution](images/image-workflow-job.PNG)
-
-### Verify workflow action
-
-Go to the **Image** tab under **Review** and verify that there is a newly created image review.
-
-![Review images](images/image-workflow-review.PNG)
-
-## Next steps
-
-In this guide, you learned how to set up and use moderation workflows from the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com). Next, see the [API console guide](../try-review-api-workflow.md) to learn how to create workflows programmatically.
cognitive-services Human In The Loop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/Review-Tool-User-Guide/human-in-the-loop.md
- Title: Learn Review tool concepts - Content Moderator-
-description: Learn about the Content Moderator Review tool, a website that coordinates a combined AI and human review moderation effort.
--- Previously updated : 10/28/2021----
-#Conceptual on the Review tool. Includes settings and dashboard
--
-# Content Moderator Review tool
--
-Azure Content Moderator provides services to combine machine learning content moderation with human reviews. The [Review tool](https://contentmoderator.cognitive.microsoft.com) website is a user-friendly front end that gives detailed access to these services.
-
-## What it does
-
-The [Review tool](https://contentmoderator.cognitive.microsoft.com), when used in conjunction with the machine-assisted moderation APIs, allows you to accomplish the following tasks in the content moderation process:
--- Use one set of tools to moderate content in multiple formats (text, image, and video).-- Automate the creation of human [reviews](../review-api.md#reviews) when moderation API results come in.-- Assign or escalate content reviews to multiple review teams, organized by content category or experience level.-- Use default or custom logic filters ([workflows](../review-api.md#workflows)) to sort and track content, without writing any code.-- Use [connectors](./configure.md#connectors) to process content with Microsoft PhotoDNA, Language, and Face services in addition to the Content Moderator APIs.-- Get key performance metrics on your content moderation processes.-
-## Review tool dashboard
-
-On the **Dashboard** tab, you can see key metrics for content reviews done within the tool. See the number of total, complete, and pending reviews for image, text, and video content.
-
-The **Pending reviews** table shows the breakdown of users and subteams that have pending or completed reviews, as well as the SLA time remaining. You can select the items in the table to go to their reviews. The search box above the table lets you filter results by team name, and the **Filter** icon lets you filter by other metrics.
-
-Switching to the **Completed reviews** tab shows the total number of items processed or completed by users and subteams. You can filter this data the same as the pending reviews.
-
-Clicking the text in the upper right corner of the dashboard displays the Daily Personal Metrics, which reports the number of reviews completed for each content type.
-
-> [!div class="mx-imgBorder"]
-> ![The review tool dashboard in a browser](images/0-dashboard.png)
-
-## Review tool credentials
-
-When you sign up with the [Review tool](https://contentmoderator.cognitive.microsoft.com), you'll be prompted to select an Azure region for your account. This is because the [Review tool](https://contentmoderator.cognitive.microsoft.com) generates a free trial key for Azure Content Moderator services. You'll need this key to access any of the services from a REST call or client SDK. You can view your key and API endpoint URL by selecting **Admin** > **Credentials**.
-
-> [!div class="mx-imgBorder"]
-> ![Content Moderator Credentials](images/settings-6-credentials.png)
-
-## Next steps
-
-See [Configure the Review tool](./configure.md) to learn how to access Review tool resources and change settings.
cognitive-services Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/api-reference.md
Title: API reference - Content Moderator
-description: Learn about the various content moderation and review APIs for Content Moderator.
+description: Learn about the content moderation APIs for Content Moderator.
# Content Moderator API reference
-You can get started with Azure Content Moderator APIs in the following ways:
+You can get started with Azure Content Moderator APIs by doing the following:
- In the Azure portal, [subscribe to the Content Moderator API](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesContentModerator).
-- See [Try Content Moderator on the web](quick-start.md) to sign up with the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/).
-## Moderation APIs
-
-You can use the following Content Moderator APIs to set up your post-moderation workflows.
+You can use the following **Content Moderator APIs** to set up your post-moderation workflows.
| Description | Reference | | -- |-|
-| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow. <br /><br />| [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
-| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow.<br /><br /> | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
-| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. <br /><br />Use this information to publish, reject, or review the content in your post-moderation workflow.<br /><br /> | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
+| **Image Moderation API**<br /><br />Scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information. | [Image Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c "Image Moderation API reference") |
+| **Text Moderation API**<br /><br />Scan text content. Profanity terms and personal data are returned. | [Text Moderation API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f "Text Moderation API reference") |
+| **Video Moderation API**<br /><br />Scan videos and detect potential adult and racy content. | [Video Moderation API overview](video-moderation-api.md "Video Moderation API overview") |
| **List Management API**<br /><br />Create and manage custom exclusion or inclusion lists of images and text. If enabled, the **Image - Match** and **Text - Screen** operations do fuzzy matching of the submitted content against your custom lists. <br /><br />For efficiency, you can skip the machine learning-based moderation step.<br /><br /> | [List Management API reference](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f675 "List Management API reference") |-
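As a rough illustration of how the Image Moderation API listed above is invoked, the sketch below assembles an Evaluate request. This is a hedged sketch, not the SDK: the region, key, and image URL are placeholders, and the endpoint path follows the Content Moderator REST reference linked in the table.

```python
# Hedged sketch: assembling a call to the Image Moderation "Evaluate"
# operation. Region, key, and image URL are placeholder values.

def build_evaluate_request(region, subscription_key, image_url):
    """Return (url, headers, body) for an Evaluate call against the
    Content Moderator image moderation endpoint."""
    url = (f"https://{region}.api.cognitive.microsoft.com"
           "/contentmoderator/moderate/v1.0/ProcessImage/Evaluate")
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    body = {"DataRepresentation": "URL", "Value": image_url}
    return url, headers, body

url, headers, body = build_evaluate_request(
    "westus", "<your-key>", "https://example.com/image.jpg")
```

The response contains adult and racy classification scores that your post-moderation workflow can act on.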
-## Review APIs
-
-The Review APIs have the following components:
-
-| Description | Reference |
-| -- |-|
-| **Jobs**<br /><br /> Initiate scan-and-review moderation workflows for both image and text content. A moderation job scans your content by using the Image Moderation API and the Text Moderation API. Moderation jobs use the defined and default workflows to generate reviews. <br /><br />After a human moderator has reviewed the auto-assigned tags and prediction data and submitted a content moderation decision, the Review API submits all information to your API endpoint.<br /><br /> | [Job reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5 "Job reference") |
-| **Reviews**<br /><br />Use the Review tool to directly create image or text reviews for human moderators.<br /><br /> After a human moderator has reviewed the auto-assigned tags and prediction data and submitted a content moderation decision, the Review API submits all information to your API endpoint.<br /><br /> | [Review reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4 "Review reference") |
-| **Workflows**<br /><br />Create, update, and get details about the custom workflows that your team creates. You define workflows by using the Review tool. <br /> <br />Workflows typically use Content Moderator, but can also use certain other APIs that are available as connectors in the Review tool.<br /><br /> | [Workflow reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59 "Workflow reference") |
cognitive-services Ecommerce Retail Catalog Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md
- Title: "Tutorial: Moderate e-commerce product images - Content Moderator"-
-description: This tutorial shows how to set up an application to analyze and classify product images with specified labels (using Azure Computer Vision and Custom Vision). Tag objectionable images to be further reviewed (using Azure Content Moderator).
- Previously updated : 01/29/2021
-#Customer intent: As a developer at an e-commerce company, I want to use machine learning to both categorize product images and tag objectionable images for further review by my team.
--
-# Tutorial: Moderate e-commerce product images with Azure Content Moderator
--
-In this tutorial, you'll learn how to use Azure Cognitive Services, including Content Moderator, to classify and moderate product images for an e-commerce scenario. You'll use Computer Vision and Custom Vision to apply tags (labels) to images, and then you'll create a team review, which combines Content Moderator's machine-learning-based technologies with human review teams to provide an intelligent moderation system.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Sign up for Content Moderator and create a review team.
-> * Use Content Moderator's image API to scan for potential adult and racy content.
-> * Use the Computer Vision service to scan for celebrity content (or other Computer-Vision-detectable tags).
-> * Use the Custom Vision service to scan for the presence of flags, toys, and pens (or other custom tags).
-> * Present the combined scan results for human review and final decision making.
-
-The complete sample code is available in the [Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) repository on GitHub.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-## Prerequisites
-- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
-- A Computer Vision subscription key (same instructions as above).
-- Any edition of [Visual Studio 2015 or 2017](https://www.visualstudio.com/downloads/).
-- A set of images for each label that the Custom Vision classifier will use (in this case toys, pens, and US flags).
-## Create a review team
-
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
-
-## Create custom moderation tags
-
-Next, create custom tags in the Review tool (see the [Tags](./review-tool-user-guide/configure.md#tags) article if you need help with this process). In this case, we will add the following tags: **celebrity**, **USA**, **flag**, **toy**, and **pen**. Not all of the tags need to be detectable categories in Computer Vision (like **celebrity**); you can add your own custom tags as long as you train the Custom Vision classifier to detect them later on.
-
-![Configure custom tags](images/tutorial-ecommerce-tags2.PNG)
-
-## Create Visual Studio project
-
-1. In Visual Studio, open the New Project dialog. Expand **Installed**, then **Visual C#**, then select **Console app (.NET Framework)**.
-1. Name the application **EcommerceModeration**, then select **OK**.
-1. If you're adding this project to an existing solution, select this project as the single startup project.
-
-This tutorial highlights the code that is central to the project, but it won't cover every line of code. Copy the full contents of _Program.cs_ from the sample project ([Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration)) into the _Program.cs_ file of your new project. Then, step through the following sections to learn about how the project works and how to use it yourself.
-
-## Define API keys and endpoints
-
-This tutorial uses three cognitive services; therefore, it requires three corresponding keys and API endpoints. See the following fields in the **Program** class:
-
-[!code-csharp[define API keys and endpoint URIs](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=21-29)]
-
-You'll need to update the `___Key` fields with the values of your subscription keys, and you need to change the `___Uri` fields to the correct endpoint URLs (you'll get the Custom Vision key and endpoint later on). You can find these values in the **Quick start** tabs of each Azure resource. Fill in the `YOURTEAMID` part of the `ReviewUri` field with the ID of the review team you created earlier. You'll fill in the final part of the `CustomVisionUri` field later on.
--
-## Primary method calls
-
-See the following code in the **Main** method, which loops through a list of image URLs. It analyzes each image with the three different services, records the applied tags in the **ReviewTags** array, and then creates a review for human moderators by sending the images to the Content Moderator Review Tool. You will explore these methods in the following sections. If you wish, you can control which images are sent to review, using the **ReviewTags** array in a conditional statement to check which tags were applied.
-
-[!code-csharp[Main: evaluate each image and create review](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=53-70)]
-
-## EvaluateAdultRacy method
-
-See the **EvaluateAdultRacy** method in the **Program** class. This method takes an image URL and an array of key-value pairs as parameters. It calls the Content Moderator Image REST API to get the Adult and Racy scores of the image. If the score for either is greater than 0.4 (the range is between 0 and 1), it sets the corresponding value in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateAdultRacy method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=73-113)]
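The tagging rule described above can be sketched in a few lines. This is a simplified Python rendering of the logic (the sample project itself is C#); the score field names follow the Evaluate response, and the single-letter tag keys are hypothetical.

```python
# Minimal sketch of the rule above: if either the adult or the racy
# score exceeds 0.4, mark the corresponding review tag as True.
# "a" (adult) and "r" (racy) are hypothetical tag keys.

THRESHOLD = 0.4

def apply_adult_racy_tags(scores, review_tags):
    """scores: dict with 'AdultClassificationScore' and
    'RacyClassificationScore' in the range 0..1.
    review_tags: dict mapping tag name -> bool, updated in place."""
    if scores["AdultClassificationScore"] > THRESHOLD:
        review_tags["a"] = True
    if scores["RacyClassificationScore"] > THRESHOLD:
        review_tags["r"] = True
    return review_tags

tags = apply_adult_racy_tags(
    {"AdultClassificationScore": 0.1, "RacyClassificationScore": 0.7},
    {"a": False, "r": False})
```

Here only the racy score crosses the threshold, so only the `r` tag is set.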
-
-## EvaluateComputerVisionTags method
-
-The next method takes an image URL and your Computer Vision subscription information and analyzes the image for the presence of celebrities. If one or more celebrities are found, it sets the corresponding value in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateComputerVisionTags method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=115-146)]
-
-## EvaluateCustomVisionTags method
-
-Next, see the **EvaluateCustomVisionTags** method, which classifies the actual products&mdash;in this case flags, toys, and pens. Follow the instructions in the [How to build a classifier](../custom-vision-service/getting-started-build-a-classifier.md) guide to build your own custom image classifier and detect flags, toys, and pens (or whatever you chose as your custom tags) in images. You can use the images in the **sample-images** folder of the [GitHub repo](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) to quickly train some of the categories in this example.
-
-![Custom Vision web page with training images of pens, toys, and flags](images/tutorial-ecommerce-custom-vision.PNG)
-
-Once you've trained your classifier, get the prediction key and prediction endpoint URL (see [Get the URL and prediction key](../custom-vision-service/use-prediction-api.md#get-the-url-and-prediction-key) if you need help with retrieving them), and assign these values to your `CustomVisionKey` and `CustomVisionUri` fields, respectively. The method uses these values to query the classifier. If the classifier finds one or more of the custom tags in the image, this method sets the corresponding value(s) in the **ReviewTags** array to **True**.
-
-[!code-csharp[define EvaluateCustomVisionTags method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=148-171)]
-
-## Create reviews for Review tool
-
-In the previous sections, you explored how the app scans incoming images for adult and racy content (Content Moderator), celebrities (Computer Vision), and various other objects (Custom Vision). Next, see the **CreateReview** method, which uploads the images with all of their applied tags (passed in as _Metadata_) to the Content Moderator Review Tool.
-
-[!code-csharp[define CreateReview method](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=173-196)]
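The payload that a review-creation step like this posts can be pictured as follows. This is a hedged sketch in Python (the sample itself is C#): one entry per image, with the applied tags passed as Key/Value metadata; treat the exact field names as an assumption based on the Review API's documented shape.

```python
# Hedged sketch of a single review item as it might be posted to the
# Review API: the image URL plus the applied tags as Key/Value metadata.

def build_review_item(image_url, content_id, tags):
    """tags: dict of tag name -> bool, e.g. {"a": True, "r": False}."""
    return {
        "Type": "Image",
        "Content": image_url,
        "ContentId": content_id,
        # Each tag becomes a Key/Value pair a human moderator can confirm.
        "Metadata": [{"Key": k, "Value": str(v)} for k, v in tags.items()],
    }

item = build_review_item("https://example.com/p.jpg", "p1", {"a": True})
```

A batch of such items would be posted to the team's reviews endpoint in one request.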
-
-The images will show up in the Review tab of the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/).
-
-![Screenshot of the Content Moderator Review tool with several images and their highlighted tags](images/tutorial-ecommerce-content-moderator.PNG)
-
-## Submit a list of test images
-
-As you can see in the **Main** method, this program looks for a "C:\Test" directory with a _Urls.txt_ file that contains a list of image URLs. Create this file and directory, or change the path to point to your text file. Then populate this file with the URLs of images you'd like to test.
-
-[!code-csharp[Main: set up test directory, read lines](~/samples-eCommerceCatalogModeration/Fusion/Program.cs?range=38-51)]
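The file-reading step above amounts to: read one image URL per line and skip blanks. A quick Python equivalent (the sample itself is C#; the temporary file here just stands in for the _Urls.txt_ test file):

```python
# Sketch of the Urls.txt loading step: one image URL per line,
# blank lines ignored.
import os
import tempfile

def read_image_urls(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Demo with a temporary file standing in for the Urls.txt test file.
tmp = os.path.join(tempfile.mkdtemp(), "Urls.txt")
with open(tmp, "w", encoding="utf-8") as f:
    f.write("https://example.com/a.jpg\n\nhttps://example.com/b.jpg\n")
urls = read_image_urls(tmp)
```

Each URL read this way is then passed through the three evaluation methods described above.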
-
-## Run the program
-
-If you've followed all of the above steps, the program should process each image (querying all three services for their relevant tags) and then upload the images with tag information to the Content Moderator Review Tool.
-
-## Next steps
-
-In this tutorial, you set up a program to analyze product images, tag them by product type, and allow a review team to make informed decisions about content moderation. Next, learn more about the details of image moderation.
-
-> [!div class="nextstepaction"]
-> [Review moderated images](./review-tool-user-guide/review-moderated-images.md)
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/encrypt-data-at-rest.md
Customer-managed keys are available in all Azure regions.
[!INCLUDE [cognitive-services-cmk](../includes/configure-customer-managed-keys.md)]
-## Enable data encryption for your Content Moderator Team
-
-To enable data encryption for your Content Moderator Review Team, see the [Quickstart: Try Content Moderator on the web](quick-start.md#create-a-review-team).
-
-> [!NOTE]
-> You'll need to provide a _Resource ID_ with the Content Moderator E0 pricing tier.
- ## Next steps * For a full list of services that support CMK, see [Customer-Managed Keys for Cognitive Services](../encryption/cognitive-services-encryption-keys-portal.md)
cognitive-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/export-delete-data.md
[!INCLUDE [deprecation notice](includes/tool-deprecation.md)]
-Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Review tool](https://contentmoderator.cognitive.microsoft.com/) and the [Moderation and Review APIs](./api-reference.md).
+Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
For more information on how to export and delete user data in Content Moderator,
| Data | Export Operation | Delete Operation | | - | - | - |
-| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
+| Account Info (Subscription Keys) | N/A | Delete using the Azure portal (Azure Subscriptions). |
| Images for custom matching | Call the [Get image IDs API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f676). Images are stored in a one-way proprietary hash format, and there is no way to extract the actual images. | Call the [Delete all Images API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f686). Or delete the Content Moderator resource using the Azure portal. |
-| Terms for custom matching | Cal the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e) | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
-| Tags | N/A | Use the **Delete** icon available for each tag in the Review UI Tag settings page. Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
-| Reviews | Call the [Get review API](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2) | Use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page.
-| Users | N/A | Use the **Delete** icon available for each user in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. Or use the **Delete Team** button in the [Review UI](https://contentmoderator.cognitive.microsoft.com/) Team settings page. |
+| Terms for custom matching | Call the [Get all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67e) | Call the [Delete all terms API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf755e3f9b070c105bd2c2/operations/57cf755e3f9b070868a1f67d). Or delete the Content Moderator resource using the Azure portal. |
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
- Title: "Tutorial: Moderate Facebook content - Content Moderator"-
-description: In this tutorial, you will learn how to use machine-learning-based Content Moderator to help moderate Facebook posts and comments.
- Previously updated : 01/29/2021
-#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
--
-# Tutorial: Moderate Facebook posts and comments with Azure Content Moderator
--
-In this tutorial, you will learn how to use Azure Content Moderator to help moderate the posts and comments on a Facebook page. Facebook will send the content posted by visitors to the Content Moderator service. Then your Content Moderator workflows will either publish the content or create reviews within the Review tool, depending on the content scores and thresholds. See the [Build 2017 demo video](https://channel9.msdn.com/Events/Build/2017/T6033) for a working example of this scenario.
-
-> [!IMPORTANT]
-> In 2018, Facebook implemented a more strict vetting policy for Facebook Apps. You will not be able to complete the steps of this tutorial if your app has not been reviewed and approved by the Facebook review team.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> * Create a Content Moderator team.
-> * Create Azure Functions that listen for HTTP events from Content Moderator and Facebook.
-> * Link a Facebook page to Content Moderator using a Facebook application.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-
-This diagram illustrates each component of this scenario:
-
-![Diagram of Content Moderator receiving information from Facebook through "FBListener" and sending information through "CMListener"](images/tutorial-facebook-moderation.png)
-
-## Prerequisites
-- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
-- A [Facebook account](https://www.facebook.com/).
-## Create a review team
-
-Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
-
-## Configure image moderation workflow
-
-Refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide to create a custom image workflow. Content Moderator will use this workflow to automatically check images on Facebook and send some to the Review tool. Take note of the workflow **name**.
-
-## Configure text moderation workflow
-
-Again, refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide; this time, create a custom text workflow. Content Moderator will use this workflow to automatically check text content. Take note of the workflow **name**.
-
-![Configure Text Workflow](images/text-workflow-configure.PNG)
-
-Test your workflow using the **Execute Workflow** button.
-
-![Test Text Workflow](images/text-workflow-test.PNG)
-
-## Create Azure Functions
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
-
-1. Create an Azure Function App as shown on the [Azure Functions](../../azure-functions/functions-create-function-app-portal.md) page.
-1. Go to the newly created Function App.
-1. Within the App, go to the **Platform features** tab and select **Configuration**. In the **Application settings** section of the next page, select **New application setting** to add the following key/value pairs:
-
- | App Setting name | value |
- | -- |-|
- | `cm:TeamId` | Your Content Moderator TeamId |
- | `cm:SubscriptionKey` | Your Content Moderator subscription key - See [Credentials](./review-tool-user-guide/configure.md#credentials) |
- | `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource.|
- | `cm:ImageWorkflow` | Name of the workflow to run on Images |
- | `cm:TextWorkflow` | Name of the workflow to run on Text |
- | `cm:CallbackEndpoint` | Url for the CMListener Function App that you will create later in this guide |
- | `fb:VerificationToken` | A secret token that you create, used to subscribe to the Facebook feed events |
 | `fb:PageAccessToken` | A Facebook Graph API access token that does not expire and allows the function to hide or delete posts on your behalf. You will get this token at a later step. |
-
- Click the **Save** button at the top of the page.
-
-1. Go back to the **Platform features** tab. Use the **+** button on the left pane to bring up the **New function** pane. The function you are about to create will receive events from Facebook.
-
- ![Azure Functions pane with the Add Function button highlighted.](images/new-function.png)
-
- 1. Click on the tile that says **Http trigger**.
- 1. Enter the name **FBListener**. The **Authorization Level** field should be set to **Function**.
- 1. Click **Create**.
- 1. Replace the contents of the **run.csx** with the contents from **FbListener/run.csx**
-
- [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/FbListener/run.csx?range=1-154)]
-
-1. Create a new **Http trigger** function named **CMListener**. This function receives events from Content Moderator. Replace the contents of the **run.csx** with the contents from **CMListener/run.csx**
-
- [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/CmListener/run.csx?range=1-110)]
---
-## Configure the Facebook page and App
-
-1. Create a Facebook App.
-
- ![facebook developer page](images/facebook-developer-app.png)
-
- 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
- 1. Go to **My Apps**.
- 1. Add a New App.
- 1. Provide a name
- 1. Select **Webhooks -> Set Up**
- 1. Select **Page** in the dropdown menu and select **Subscribe to this object**
- 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings**
- 1. Once subscribed, scroll down to feed and select **subscribe**.
- 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
-
-1. Create a Facebook Page.
-
- > [!IMPORTANT]
- > In 2018, Facebook implemented a more strict vetting of Facebook apps. You will not be able to execute sections 2, 3 and 4 if your app has not been reviewed and approved by the Facebook review team.
-
- 1. Navigate to [Facebook](https://www.facebook.com/pages) and create a **new Facebook Page**.
- 1. Allow the Facebook App to access this page by following these steps:
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 1. Select **Application**.
- 1. Select **Page Access Token**, Send a **Get** request.
- 1. Select the **Page ID** in the response.
 1. Now append **/subscribed_apps** to the URL and send a **Get** request (the response is empty).
- 1. Submit a **Post** request. You get the response as **success: true**.
-
-3. Create a non-expiring Graph API access token.
-
- 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
- 2. Select the **Application** option.
- 3. Select the **Get User Access Token** option.
- 4. Under the **Select Permissions**, select **manage_pages** and **publish_pages** options.
- 5. We will use the **access token** (Short Lived Token) in the next step.
-
-4. We use Postman for the next few steps.
-
- 1. Open **Postman** (or get it [here](https://www.getpostman.com/)).
- 2. Import these two files:
- 1. [Postman Collection](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/Facebook%20Permanant%20Page%20Access%20Token.postman_collection.json)
- 2. [Postman Environment](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/FB%20Page%20Access%20Token%20Environment.postman_environment.json)
- 3. Update these environment variables:
-
- | Key | Value |
- | -- |-|
- | appId | Insert your Facebook App Identifier here |
- | appSecret | Insert your Facebook App's secret here |
- | short_lived_token | Insert the short lived user access token you generated in the previous step |
- 4. Now run the 3 APIs listed in the collection:
- 1. Select **Generate Long-Lived Access Token** and click **Send**.
- 2. Select **Get User ID** and click **Send**.
- 3. Select **Get Permanent Page Access Token** and click **Send**.
- 5. Copy the **access_token** value from the response and assign it to the App setting, **fb:PageAccessToken**.
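What the first Postman call does under the hood is exchange the short-lived user token for a long-lived one via the Graph API's `oauth/access_token` endpoint. A hedged sketch of that request URL, with placeholder credentials:

```python
# Hedged sketch: building the Graph API token-exchange URL that the
# "Generate Long-Lived Access Token" Postman request issues.
# APP_ID / APP_SECRET / SHORT_TOKEN are placeholders, not real values.
from urllib.parse import urlencode

def long_lived_token_url(app_id, app_secret, short_lived_token):
    params = urlencode({
        "grant_type": "fb_exchange_token",
        "client_id": app_id,
        "client_secret": app_secret,
        "fb_exchange_token": short_lived_token,
    })
    return f"https://graph.facebook.com/oauth/access_token?{params}"

url = long_lived_token_url("APP_ID", "APP_SECRET", "SHORT_TOKEN")
```

Sending a GET request to this URL returns a JSON body whose `access_token` field is the long-lived token used in the remaining steps.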
-
-The solution sends all images and text posted on your Facebook page to Content Moderator. Then the workflows that you configured earlier are invoked. Content that does not pass the criteria defined in your workflows is sent to reviews within the Review tool. The rest of the content is published automatically.
-
-## Next steps
-
-In this tutorial, you set up a program to moderate the posts and comments on a Facebook page and send flagged content to a review team for informed content moderation decisions. Next, learn more about the details of image moderation.
-
-> [!div class="nextstepaction"]
-> [Image moderation](./image-moderation-api.md)
cognitive-services Image Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/image-moderation-api.md
Title: Image Moderation - Content Moderator
-description: Use Content Moderator's machine-assisted image moderation and human-in-the-loop Review tool to moderate images for adult and racy content.
+description: Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content.
# Learn image moderation concepts
-Use Content Moderator's machine-assisted image moderation and [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) to moderate images for adult and racy content. Scan images for text content and extract that text, and detect faces. You can match images against custom lists, and take further action.
+Use Content Moderator's machine-assisted image moderation to moderate images for adult and racy content. Scan images for text content and extract that text, and detect faces. You can match images against custom lists, and take further action.
## Evaluating for adult and racy content
Example extract:
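The extract itself was lost in this digest. As a stand-in, the sketch below shows a representative Evaluate-style response and one way the adult/racy flags might be turned into a decision. Field names follow the public Image Moderation Evaluate response; the scores and the `summarize` helper are illustrative, not from the source:

```python
# Representative shape of an Evaluate response (values illustrative).
sample_response = {
    "AdultClassificationScore": 0.039,
    "IsImageAdultClassified": False,
    "RacyClassificationScore": 0.068,
    "IsImageRacyClassified": False,
    "Result": False,
    "Status": {"Code": 3000, "Description": "OK"},
}

def summarize(evaluation: dict) -> str:
    """Map the adult/racy boolean flags to a simple moderation decision."""
    if evaluation["IsImageAdultClassified"]:
        return "block"    # adult content detected
    if evaluation["IsImageRacyClassified"]:
        return "review"   # racy content: route to a human
    return "approve"

print(summarize(sample_response))  # -> approve
```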
-## Review tool
-
-For more nuanced cases, use the Content Moderator [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) and its API to surface the moderation results and content in the review for your human moderators. They review the machine-assigned tags and confirm their final decisions.
-
-![Image review for human moderators](images/moderation-reviews-quickstart-dotnet.PNG)
## Next steps
-Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples. Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
+Test drive the [Image Moderation API console](try-image-api.md) and use the REST API code samples.
cognitive-services Moderation Jobs Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/moderation-jobs-quickstart-dotnet.md
- Title: "Use moderation jobs using .NET - Content Moderator"
-description: Use the Content Moderator .NET SDK to initiate end-to-end content moderation jobs for image or text content in Azure Content Moderator.
-Previously updated : 10/24/2019
-#The jobs how-to for .NET SDK
--
-# Define and use moderation jobs (.NET)
--
-A moderation job serves as a kind of wrapper for the functionality of content moderation, workflows and reviews. This guide provides information and code samples to help you get started using the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-
-- Start a moderation job to scan and create reviews for human moderators
-- Get the status of the pending review
-- Track and get the final status of the review
-- Submit the review results to the callback URL
-
-## Prerequisites
-
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-
-## Ensure your API key can call the review API for review creation
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](./review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-## Define a custom moderation workflow
-
-A moderation job scans your content using the APIs and uses a **workflow** to determine whether to create reviews or not.
-While the review tool contains a default workflow, let's [define a custom workflow](Review-Tool-User-Guide/Workflows.md) for this quickstart.
-
-You use the name of the workflow in your code that starts the moderation job.
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
- In the sample code, name the project **CreateReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages:
-
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Newtonsoft.Json
-
-### Update the program's using statements
-
-Modify the program's using statements.
-
-```csharp
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-```
-
-### Create the Content Moderator client
-
-Add the following code to create a Content Moderator client for your subscription.
-
-> [!IMPORTANT]
-> Update the **AzureEndpoint** and **CMSubscriptionKey** fields with
-> the values of your endpoint URL and subscription key.
-
-```csharp
-/// <summary>
-/// Wraps the creation and configuration of a Content Moderator client.
-/// </summary>
-/// <remarks>This class library contains insecure code. If you adapt this
-/// code for use in production, use a secure method of storing and using
-/// your Content Moderator subscription key.</remarks>
-public static class Clients
-{
- /// <summary>
- /// The base URL fragment for Content Moderator calls.
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR API KEY";
-
- /// <summary>
- /// Returns a new Content Moderator client for your subscription.
- /// </summary>
- /// <returns>The new client.</returns>
- /// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
- /// When you have finished using the client,
- /// you should dispose of it either directly or indirectly. </remarks>
- public static ContentModeratorClient NewClient()
- {
- // Create and initialize an instance of the Content Moderator API wrapper.
- ContentModeratorClient client = new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey));
-
- client.Endpoint = AzureEndpoint;
- return client;
- }
-}
-```
-
-### Initialize application-specific settings
-
-Add the following constants and static fields to the **Program** class in Program.cs.
-
-> [!NOTE]
-> You set the TeamName constant to the name you used when you
-> created your Content Moderator subscription. You retrieve TeamName from the
-> Content Moderator web site.
-> Once you log in, select **Credentials** from the **Settings** (gear) menu.
->
-> Your team name is the value of the **Id** field in the **API** section.
-
-```csharp
-/// <summary>
-/// The moderation job will use this workflow that you defined earlier.
-/// See the quickstart article to learn how to set up custom workflows.
-/// </summary>
-private const string WorkflowName = "OCR";
-
-/// <summary>
-/// The name of the team to assign the job to.
-/// </summary>
-/// <remarks>This must be the team name you used to create your
-/// Content Moderator account. You can retrieve your team name from
-/// the Content Moderator web site. Your team name is the Id associated
-/// with your subscription.</remarks>
-private const string TeamName = "***";
-
-/// <summary>
-/// The URL of the image to create a review job for.
-/// </summary>
-private const string ImageUrl =
- "https://moderatorsampleimages.blob.core.windows.net/samples/sample5.png";
-
-/// <summary>
-/// The name of the log file to create.
-/// </summary>
-/// <remarks>Relative paths are relative to the execution directory.</remarks>
-private const string OutputFile = "OutputLog.txt";
-
-/// <summary>
-/// The number of seconds to delay after a review has finished before
-/// getting the review results from the server.
-/// </summary>
-private const int latencyDelay = 45;
-
-/// <summary>
-/// The callback endpoint for completed reviews.
-/// </summary>
-/// <remarks>Reviews show up for reviewers on your team.
-/// As reviewers complete reviews, results are sent to the
-/// callback endpoint using an HTTP POST request.</remarks>
-private const string CallbackEndpoint = "";
-```
-
-## Add code to auto-moderate, create a review, and get the job details
-
-> [!Note]
-> In practice, you set the callback URL **CallbackEndpoint** to the URL
-> that receives the results of the manual review (via an HTTP POST request).
-
-Start by adding the following code to the **Main** method.
-
-```csharp
-using (TextWriter writer = new StreamWriter(OutputFile, false))
-{
- using (var client = Clients.NewClient())
- {
- writer.WriteLine("Create review job for an image.");
- var content = new Content(ImageUrl);
-
- // The WorkflowName contains the name of the workflow defined in the online review tool.
- // See the quickstart article to learn more.
- var jobResult = client.Reviews.CreateJobWithHttpMessagesAsync(
- TeamName, "image", "contentID", WorkflowName, "application/json", content, CallbackEndpoint);
-
- // Record the job ID.
- var jobId = jobResult.Result.Body.JobIdProperty;
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobResult.Result.Body, Formatting.Indented));
-
- Thread.Sleep(2000);
- writer.WriteLine();
-
- writer.WriteLine("Get review job status.");
- var jobDetails = client.Reviews.GetJobDetailsWithHttpMessagesAsync(
- TeamName, jobId);
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobDetails.Result.Body, Formatting.Indented));
-
- Console.WriteLine();
- Console.WriteLine("Perform manual reviews on the Content Moderator site.");
- Console.WriteLine("Then, press any key to continue.");
- Console.ReadKey();
-
- Console.WriteLine();
- Console.WriteLine($"Waiting {latencyDelay} seconds for results to propagate.");
- Thread.Sleep(latencyDelay * 1000);
-
- writer.WriteLine("Get review details.");
- jobDetails = client.Reviews.GetJobDetailsWithHttpMessagesAsync(
- TeamName, jobId);
-
- // Log just the response body from the returned task.
- writer.WriteLine(JsonConvert.SerializeObject(
- jobDetails.Result.Body, Formatting.Indented));
- }
- writer.Flush();
- writer.Close();
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS)
-> rate limit. If you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
-
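The rate-limit note above matters in practice: a free-tier key allows one request per second, and exceeding it surfaces as a 429 error from the SDK. A minimal sketch of the usual mitigation, exponential backoff; the function names and the `RuntimeError` stand-in for the SDK's rate-limit exception are illustrative, not the actual .NET API:

```python
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff on 429-style errors."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as err:  # stand-in for the SDK's rate-limit exception
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")

# Example: a fake endpoint that is throttled twice, then succeeds.
attempts = {"n": 0}
def fake_endpoint():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429: rate limit exceeded")
    return "ok"

print(call_with_backoff(fake_endpoint, base_delay=0.01))  # -> ok
```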
-## Run the program and review the output
-
-You see the following sample output in the console:
-
-```console
-Perform manual reviews on the Content Moderator site.
-Then, press any key to continue.
-```
-
-Sign into the Content Moderator review tool to see the pending image review.
-
-Use the **Next** button to submit.
-
-![Image review for human moderators](images/ocr-sample-image.PNG)
-
-## See the sample output in the log file
-
-> [!NOTE]
-> In your output file, the strings **Teamname**, **ContentId**, **CallBackEndpoint**, and **WorkflowId**
-> reflect the values you used earlier.
-
-```json
-Create moderation job for an image.
-{
- "JobId": "2018014caceddebfe9446fab29056fd8d31ffe"
-}
-
-Get review details.
-{
- "Id": "2018014caceddebfe9446fab29056fd8d31ffe",
- "TeamName": "some team name",
- "Status": "InProgress",
- "WorkflowId": "OCR",
- "Type": "Image",
- "CallBackEndpoint": "",
- "ReviewId": "",
- "ResultMetaData": [],
- "JobExecutionReport": [
- {
- "Ts": "2018-01-07T00:38:26.7714671",
- "Msg": "Successfully got hasText response from Moderator"
- },
- {
- "Ts": "2018-01-07T00:38:26.4181346",
- "Msg": "Getting hasText from Moderator"
- },
- {
- "Ts": "2018-01-07T00:38:25.5122828",
- "Msg": "Starting Execution - Try 1"
- }
- ]
-}
-```
-
-## Your callback URL, if provided, receives this response
-
-You see a response like the following example:
-
-> [!NOTE]
-> In your callback response, the strings **ContentId** and **WorkflowId**
-> reflect the values you used earlier.
-
-```json
-{
- "JobId": "2018014caceddebfe9446fab29056fd8d31ffe",
- "ReviewId": "201801i28fc0f7cbf424447846e509af853ea54",
- "WorkFlowId": "OCR",
- "Status": "Complete",
- "ContentType": "Image",
- "CallBackType": "Job",
- "ContentId": "contentID",
- "Metadata": {
- "hastext": "True",
- "ocrtext": "IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n",
- "imagename": "contentID"
- }
-}
-```
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET, and get started on your integration.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/overview.md
keywords: content moderator, azure content moderator, online moderator, content
[!INCLUDE [deprecation notice](includes/tool-deprecation.md)]
-Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service which scans text, image, and videos and applies content flags automatically, as well as the Review tool, an online moderator environment for a team of human reviewers.
+Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service which scans text, image, and videos and applies content flags automatically.
You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
The following are a few scenarios in which a software developer or team would re
## What it includes
-The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK. It also includes the Review tool, which allows human reviewers to aid the service and improve or fine-tune its moderation function.
+The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK.
## Moderation APIs
The following table describes the different types of moderation APIs.
|[**Custom image lists**](try-image-list-api.md)| Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.|
|[**Video moderation**](video-moderation-api.md)| Scans videos for adult or racy content and returns time markers for said content.|
-## Review APIs
-
-The Review APIs let you integrate your moderation pipeline with human reviewers. Use the [Jobs](review-api.md#jobs), [Reviews](review-api.md#reviews), and [Workflow](review-api.md#workflows) operations to create and automate human-in-the-loop workflows with the [Review tool](#review-tool) (below).
-
-> [!NOTE]
-> The Workflow API is not yet available in the .NET SDK but can be used with the REST endpoint.
-
-![block diagram for Content Moderator review APIs](images/content-moderator-rev-api.png)
-
-## Review tool
-
-The Content Moderator service also includes the web-based [Review tool](Review-Tool-User-Guide/human-in-the-loop.md), which hosts the content reviews for human moderators to process. The human input doesn't train the service, but the combined work of the service and human review teams allows developers to strike the right balance between efficiency and accuracy. The Review tool also provides a user-friendly front end for several Content Moderator resources.
-
-![Content Moderator Review tool homepage](images/homepage.PNG)
-
## Data privacy and security

As with all of the Cognitive Services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the [Cognitive Services page](https://www.microsoft.com/trustcenter/cloudservices/cognitiveservices) on the Microsoft Trust Center to learn more.
cognitive-services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/quick-start.md
- Title: "Quickstart: Try Content Moderator on the web"
-description: In this quickstart, you'll use the online Content Moderator Review tool to test out the basic functionality of Content Moderator without having to write any code.
-Previously updated : 09/28/2021
-keywords: content moderator, content moderation
--
-# Quickstart: Try Content Moderator on the web
--
-In this quickstart, you'll use the online Content Moderator Review tool to test out the basic functionality of Content Moderator without having to write any code. If you wish to integrate this service into your content moderation app more quickly, see the other quickstarts in the [Next steps](#next-steps) section.
-
-## Prerequisites
-
-- A web browser
-
-## Set up the review tool
-The Content Moderator Review tool is a web-based tool that allows human reviewers to aid the cognitive service in making decisions. In this guide, you'll go through the short process of setting up the review tool so that you can see how the Content Moderator service works. Go to the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) site and sign up.
-
-![Content Moderator Home Page](images/homepage.PNG)
-
-## Create a review team
-
-Next, create a review team. In a working scenario, this team will be the group of people who manually review the service's moderation decisions. To create a team, you'll need to select a **Region**, and provide a **Team Name** and a **Team ID**. If you wish to invite colleagues to the team, you can do so by entering their email addresses here.
-
-> [!NOTE]
-> **Team Name** is a friendly name for your review team. This is the name displayed in the Azure portal. The **Team ID** is what's used to identify your review team programmatically.
-
-> [!div class="mx-imgBorder"]
-> ![Invite team member](images/create-team.png)
-
-If you choose to encrypt data using a customer-managed key (CMK), you'll be prompted for the **Resource ID** for your Content Moderator resource in the E0 pricing tier. The resource you provide must be unique to this team.
-
-> [!div class="mx-imgBorder"]
-> ![Invite team member with CMK](images/create-team-cmk.png)
-
-## Upload sample content
-
-Now you're ready to upload sample content. Select **Try > Image**, **Try > Text**, or **Try > Video**.
-
-> [!div class="mx-imgBorder"]
-> ![Try Image or Text Moderation](images/tryimagesortext.png)
-
-Submit your content for moderation. You can use the following sample text content:
-
-```
-Is this a grabage email abcdef@abcd.com, phone: 4255550111, IP: 255.255.255.255, 1234 Main Boulevard, Panapolis WA 96555.
-<offensive word> is the profanity here. Is this information PII? phone 4255550111
-```
-
-Internally, the review tool will call the moderation APIs to scan your content. Once the scanning is complete, you'll see a message informing you that there are results waiting for your review.
-
-> [!div class="mx-imgBorder"]
-> ![Moderate files](images/submitted.png)
-
-## Review moderation tags
-
-Review the applied moderation tags. You can see which tags were applied to your content and what the score was in each category. See the [Image](image-moderation-api.md), [Text](text-moderation-api.md), and [Video](video-moderation-api.md) moderation articles to learn more about what the different content tags indicate.
-
-<!-- ![Review results](images/reviewresults_text.png) -->
-
-In a project, you or your review team can change these tags or add more tags as needed. You'll submit these changes with the **Next** button. As your business application calls the Moderator APIs, the tagged content will queue up here, ready to be reviewed by the human review teams. You can quickly review large volumes of content using this approach.
-
-At this point, you've used the Content Moderator Review tool to see examples of what the Content Moderator service can do. Next, you can either learn more about the review tool and how to integrate it into a software project using the Review APIs, or you can skip to the [Next steps](#next-steps) section to learn how to use the Moderation APIs themselves in your app.
-
-## Learn more about the review tool
-
-To learn more about how to use the Content Moderator Review tool, take a look at the [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) guide, and see the Review tool APIs to learn how to fine-tune the human review experience:
-- The [Job API](try-review-api-job.md) scans your content by using the moderation APIs and generates reviews in the review tool.
-- The [Review API](try-review-api-review.md) directly creates image, text, or video reviews for human moderators without first scanning the content.
-- The [Workflow API](try-review-api-workflow.md) creates, updates, and gets details about the custom workflows that your team creates.
-
-Or, continue with the next steps to get started using the Moderation APIs in your code.
-
-## Next steps
-
-Learn how to use the Moderation APIs themselves in your app.
-- Implement image moderation. Use the [API console](try-image-api.md) or follow a [quickstart](client-libraries.md) to scan images and detect potential adult and racy content by using tags, confidence scores, and other extracted information.
-- Implement text moderation. Use the [API console](try-text-api.md) or follow a [quickstart](client-libraries.md) to scan text content for potential profanity, personal data, and other unwanted text.
-- Implement video moderation. Follow the [Video moderation how-to guide for C#](video-moderation-api.md) to scan videos and detect potential adult and racy content.
cognitive-services Review Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/review-api.md
- Title: Reviews, Workflows, and Jobs concepts - Content Moderator
-description: In this article, you will learn about the core concepts of the Review tool; reviews, workflows, and jobs.
-Previously updated : 03/14/2019
-#Customer intent: broad conceptual overview of key concepts
--
-# Content moderation reviews, workflows, and jobs
--
-Content Moderator combines machine-assisted moderation with human-in-the-loop capabilities to create an optimal moderation process for real-world scenarios. It does this through the cloud-based [Review tool](https://contentmoderator.cognitive.microsoft.com). In this guide, you'll learn about the core concepts of the Review tool: reviews, workflows, and jobs.
-
-## Reviews
-
-In a review, content is uploaded to the Review tool. You can view it by clicking its content type under **Review** tab on the dashboard. From the review screen, you can alter the applied tags and apply your own custom tags as appropriate. When you submit a review, the results are sent to a specified callback endpoint, and the content is removed from the site.
-
-> [!div class="mx-imgBorder"]
-> ![The Review drop-down menu is highlighted. It shows these content types: Image, Text, and Video.](./Review-Tool-user-Guide/images/review-tab.png)
-
-### Manage reviews
-
-From the dashboard, navigate to **Admin** -> **Manage Reviews** to view the admin screen. Here, you can see a list of all reviews (pending and completed).
-
-The three-dot **Actions** button on each review lets you go to the review screen or inspect the history of that review.
-
-> [!div class="mx-imgBorder"]
-> ![Review tool website, on the Review screen](./Review-Tool-user-Guide/images/manage-reviews.png)
-
-Use the **Search** toolbar to sort the reviews by a variety of categories such as review state, tags, content type, subteams, assigned users, and created/modified date.
-
-> [!div class="mx-imgBorder"]
-> ![The Search toolbar is shown. It has various combo boxes for entering search criteria, such as Review State and Tags.](./Review-Tool-user-Guide/images/review-search.png)
-
-See the [Review tool guide](./review-tool-user-guide/review-moderated-images.md) to get started creating reviews, or see the [API console guide](./try-review-api-review.md) to learn how to do so programmatically.
-
-## Workflows
-
-A workflow is a cloud-based customized filter for content. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. With the Content Moderator connector, a workflow can automatically apply moderation tags and create reviews with submitted content.
-
-### View workflows
-
-To view your existing workflows, go to the [Review tool](https://contentmoderator.cognitive.microsoft.com/) and select **Admin** > **Workflows**.
-
-> [!div class="mx-imgBorder"]
-> ![Default workflow](images/default-workflow-list.png)
-
-Workflows are defined as JSON strings, which makes them accessible programmatically. If you select the **Edit** option for your workflow and then select the **JSON** tab, you'll see a JSON expression like the following:
-
-```json
-{
- "Type": "Logic",
- "If": {
- "ConnectorName": "moderator",
- "OutputName": "isAdult",
- "Operator": "eq",
- "Value": "true",
- "Type": "Condition"
- },
- "Then": {
- "Perform": [
- {
- "Name": "createreview",
- "CallbackEndpoint": null,
- "Tags": []
- }
- ],
- "Type": "Actions"
- }
-}
-```
-
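The workflow JSON above is a single If/Then rule: when the moderator connector reports `isAdult` equal to `"true"`, perform the `createreview` action. A rough sketch of how such a rule could be evaluated against a connector's output; the `actions_for` helper is hypothetical, not the service's actual workflow engine:

```python
import json

# The same If/Then workflow shown in the article.
workflow = json.loads("""
{
  "Type": "Logic",
  "If": {
    "ConnectorName": "moderator",
    "OutputName": "isAdult",
    "Operator": "eq",
    "Value": "true",
    "Type": "Condition"
  },
  "Then": {
    "Perform": [{"Name": "createreview", "CallbackEndpoint": null, "Tags": []}],
    "Type": "Actions"
  }
}
""")

def actions_for(rule: dict, outputs: dict) -> list:
    """Return the action names to perform when the If-condition matches."""
    cond = rule["If"]
    actual = str(outputs.get(cond["OutputName"])).lower()
    if cond["Operator"] == "eq" and actual == cond["Value"]:
        return [step["Name"] for step in rule["Then"]["Perform"]]
    return []

print(actions_for(workflow, {"isAdult": True}))   # -> ['createreview']
print(actions_for(workflow, {"isAdult": False}))  # -> []
```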
-See the [Review tool guide](./review-tool-user-guide/workflows.md) to get started creating and using workflows, or see the [API console guide](./try-review-api-workflow.md) to learn how to do so programmatically.
-
-## Jobs
-
-A moderation job serves as a kind of wrapper for the functionality of content moderation, workflows, and reviews. The job scans your content using the Content Moderator image moderation API or text moderation API and then checks it against the designated workflow. Based on the workflow results, it may or may not create a review for the content in the [Review tool](./review-tool-user-guide/human-in-the-loop.md). While both reviews and workflows can be created and configured with their respective APIs, the job API allows you to obtain a detailed report of the entire process (which can be sent to a specified callback endpoint).
-
-See the [API console guide](./try-review-api-job.md) to get started using jobs.
-
-## Next steps
-
-* Test drive the [Job API console](try-review-api-job.md), and use the REST API code samples. If you're familiar with Visual Studio and C#, also check out the [Jobs .NET quickstart](moderation-jobs-quickstart-dotnet.md).
-* For reviews, get started with the [Review API console](try-review-api-review.md), and use the REST API code samples. Then see the reviews section of the [.NET quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-* For video reviews, use the [Video review quickstart](video-reviews-quickstart-dotnet.md), and learn how to [add transcripts to the video review](video-transcript-reviews-quickstart-dotnet.md).
cognitive-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/samples-dotnet.md
ms.devlang: csharp
The following list includes links to the code samples built using the Azure Content Moderator SDK for .NET.
-## Moderation
- **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
The following list includes links to the code samples built using the Azure Cont
- **Video moderation**: [Scan a video for adult and racy content and get results](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoModeration/Program.cs). See [quickstart](video-moderation-api.md).
-## Review
-
-- **Image jobs**: [Start a moderation job that scans and creates reviews](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageJobs/Program.cs). See [quickstart](moderation-jobs-quickstart-dotnet.md).
-- **Image reviews**: [Create reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageReviews/Program.cs). See [quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
-- **Video reviews**: [Create video reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoReviews/Program.cs). See [quickstart](video-reviews-quickstart-dotnet.md)
-- **Video transcript reviews**: [Create video transcript reviews for human-in-the-loop](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/VideoTranscriptReviews/Program.cs) See [quickstart](video-reviews-quickstart-dotnet.md)
-See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
+See all .NET samples at the [Content Moderator .NET samples on GitHub](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator).
cognitive-services Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/samples-rest.md
The following list includes links to code samples built using the Azure Content
- [Image moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageModeration)
- [Text moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/TextModeration)
- [Video moderation](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/VideoModeration)
-- [Image reviews](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageReviews)
-- [Image jobs](https://github.com/MicrosoftContentModerator/ContentModerator-API-Samples/tree/master/ImageJob)

For walkthroughs of these samples, check out the [on-demand webinar](https://info.microsoft.com/cognitive-services-content-moderator-ondemand.html).
cognitive-services Term Lists Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/term-lists-quickstart-dotnet.md
# Check text against a custom term list in C#
-The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization. For example, you might want to tag competitor names for further review.
+The default global list of terms in Azure Content Moderator is sufficient for most content moderation needs. However, you might need to screen for terms that are specific to your organization.
You can use the [Content Moderator SDK for .NET](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to create custom lists of terms to use with the Text Moderation API.
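Conceptually, screening against a custom term list is a lookup of each word in the text against your list, returning the matched term and its position. A toy sketch of that idea only; the real Text Moderation API also handles variants and languages, and the terms below are hypothetical:

```python
import re

# Hypothetical organization-specific terms (e.g., competitor names).
custom_terms = {"competitorx", "internalcodename"}

def screen(text: str, terms: set) -> list:
    """Return (term, index) pairs for custom terms found in the text."""
    hits = []
    for match in re.finditer(r"[A-Za-z]+", text):
        word = match.group(0).lower()
        if word in terms:
            hits.append((word, match.start()))
    return hits

print(screen("Ask CompetitorX about the InternalCodename launch.", custom_terms))
# -> [('competitorx', 4), ('internalcodename', 26)]
```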
cognitive-services Text Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/text-moderation-api.md
# Learn text moderation concepts
-Use Content Moderator's text moderation models to analyze text content.
-
-You can block, approve or review the content based on your policies and thresholds (see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews). Use the text moderation models to augment human moderation of environments where partners, employees and consumers generate text content. These include chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
+Use Content Moderator's text moderation models to analyze text content, such as chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
The service response includes the following information:
If the API detects any profane terms in any of the [supported languages](./langu
## Classification
-Content Moderator's machine-assisted **text classification feature** supports **English only**, and helps detect potentially undesired content. The flagged content may be assessed as inappropriate depending on context. It conveys the likelihood of each category and may recommend a human review. The feature uses a trained model to identify possible abusive, derogatory or discriminatory language. This includes slang, abbreviated words, offensive, and intentionally misspelled words for review.
+Content Moderator's machine-assisted **text classification feature** supports **English only**, and helps detect potentially undesired content. The flagged content may be assessed as inappropriate depending on context. It conveys the likelihood of each category. The feature uses a trained model to identify possible abusive, derogatory or discriminatory language. This includes slang, abbreviated words, offensive, and intentionally misspelled words.
The following JSON extract shows an example output:
The Content Moderator provides a [Term List API](https://westus.dev.cognitive.mi
## Next steps
-Test out the APIs with the [Text moderation API console](try-text-api.md). Also see [Reviews, workflows, and jobs](./review-api.md) to learn how to set up human reviews.
+Test out the APIs with the [Text moderation API console](try-text-api.md).
cognitive-services Try Image Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-image-api.md
Title: Moderate images with the API Console - Content Moderator
-description: Use the Image Moderation API in Azure Content Moderator to initiate scan-and-review moderation workflows for image content.
+description: Use the Image Moderation API in Azure Content Moderator to scan image content.
# Moderate images from the API console
-Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to initiate scan-and-review moderation workflows for image content. The moderation job scans your content for profanity, and compares it against custom and shared blocklists.
+Use the [Image Moderation API](https://westus.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66c) in Azure Content Moderator to scan image content. The moderation job scans your content for profanity, and compares it against custom and shared blocklists.
## Use the API console

Before you can test-drive the API in the online console, you need your subscription key. This is located on the **Settings** tab, in the **Ocp-Apim-Subscription-Key** box. For more information, see [Overview](overview.md).
cognitive-services Try Review Api Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-job.md
- Title: Use moderation jobs with the REST API console - Content Moderator
-description: Use the Review API's job operations to initiate end-to-end content moderation jobs for image or text content in Azure Content Moderator.
-Previously updated : 10/24/2019
-# Define and use moderation jobs (API console)
-A moderation job serves as a wrapper for the functionality of content moderation, workflows, and reviews. This guide shows you how to use the job REST APIs to initiate and check content moderation jobs. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-- (Optional) [Define a custom workflow](./Review-Tool-User-Guide/Workflows.md) to use with your job; you can also use the default workflow.
-## Create a job
-
-To create a moderation job, go to the [Job - Create](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c5) API reference page and select the button for your subscription region. You can find your region in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/). This starts the API console, where you can easily construct and run REST API calls.
-
-![Job - Create page region selection](images/test-drive-job-1.png)
-
-### Enter REST call parameters
-
-Enter the following values to construct the REST call:
-- **teamName**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **ContentType**: This can be "Image", "Text", or "Video".
-- **ContentId**: A custom identifier string. This string is passed to the API and returned through the callback. It is useful for associating internal identifiers or metadata with the results of a moderation job.
-- **Workflowname**: The name of the workflow you previously created (or "default" for the default workflow).
-- **CallbackEndpoint**: (Optional) The URL to receive callback information when the review is completed.
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this key on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-### Fill in the request body
-
-The body of your REST call contains one field, **ContentValue**. Paste in the raw text content if you are moderating text, or enter an image or video URL if you're moderating an image or video. You can use the following sample image URL: [https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg](https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg)
-
-![Job - Create console query parameters, headers, and Request body box](images/job-api-console-inputs.PNG)
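The same Job - Create call can be assembled outside the console. The following is a minimal Python sketch, not from the article: the region, team name, and IDs are illustrative placeholders, and the request is only constructed, not sent.

```python
import json
from urllib.parse import urlencode

# Placeholder values -- replace with your own region, team name, and key.
region = "westus2"
team_name = "yourteamname"
params = {
    "ContentType": "Image",
    "ContentId": "my-internal-id-001",
    "WorkflowName": "default",
    "CallBackEndpoint": "",  # optional
}

# The team name goes in the path; the remaining values are query parameters.
url = (
    f"https://{region}.api.cognitive.microsoft.com/contentmoderator"
    f"/review/v1.0/teams/{team_name}/jobs?{urlencode(params)}"
)

# The request body carries the single ContentValue field:
# an image/video URL for media content, or the raw string for text content.
body = json.dumps({
    "ContentValue": "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg"
})

headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": "YOUR-KEY",
}

print(url)
```

From here, any HTTP client can POST `body` with `headers` to `url`.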
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays an ID for the job. Copy this ID to use in the following steps.
-
-![Review - Create console Response content box displays the review ID](images/test-drive-job-3.PNG)
-
-## Get job status
-
-To get the status and details of a running or completed job, go to the [Job - Get](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c3) API reference page and select the button for your region (the region in which your key is administered).
-
-![Job - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. For this step, **JobId** is the unique ID string you received when you created the job. Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the job in JSON format, like the following:
-
-```json
-{
- "Id":"2018014caceddebfe9446fab29056fd8d31ffe",
- "TeamName":"some team name",
- "Status":"Complete",
- "WorkflowId":"OCR",
- "Type":"Image",
- "CallBackEndpoint":"",
- "ReviewId":"201801i28fc0f7cbf424447846e509af853ea54",
- "ResultMetaData":[
- {
- "Key":"hasText",
- "Value":"True"
- },
- {
- "Key":"ocrText",
- "Value":"IF WE DID \r\nALL \r\nTHE THINGS \r\nWE ARE \r\nCAPABLE \r\nOF DOING, \r\nWE WOULD \r\nLITERALLY \r\nASTOUND \r\nOURSELVE \r\n"
- }
- ],
- "JobExecutionReport":[
- {
- "Ts":"2018-01-07T00:38:29.3238715",
- "Msg":"Posted results to the Callbackendpoint: https://requestb.in/vxke1mvx"
- },
- {
- "Ts":"2018-01-07T00:38:29.2928416",
- "Msg":"Job marked completed and job content has been removed"
- },
- {
- "Ts":"2018-01-07T00:38:29.0856472",
- "Msg":"Execution Complete"
- },
- {
- "Ts":"2018-01-07T00:38:26.7714671",
- "Msg":"Successfully got hasText response from Moderator"
- },
- {
- "Ts":"2018-01-07T00:38:26.4181346",
- "Msg":"Getting hasText from Moderator"
- },
- {
- "Ts":"2018-01-07T00:38:25.5122828",
- "Msg":"Starting Execution - Try 1"
- }
- ]
-}
-```
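A callback handler or polling loop typically only needs a few fields from that response. This hedged sketch parses a trimmed copy of the sample above, folding the `ResultMetaData` key-value pairs into a dictionary:

```python
import json

# Trimmed copy of the Job - Get sample response above.
response_text = """
{
  "Id": "2018014caceddebfe9446fab29056fd8d31ffe",
  "Status": "Complete",
  "ReviewId": "201801i28fc0f7cbf424447846e509af853ea54",
  "ResultMetaData": [
    {"Key": "hasText", "Value": "True"},
    {"Key": "ocrText", "Value": "IF WE DID ALL THE THINGS WE ARE CAPABLE OF DOING"}
  ]
}
"""

job = json.loads(response_text)

# ResultMetaData is a list of Key/Value pairs; fold it into a dict for lookup.
meta = {item["Key"]: item["Value"] for item in job["ResultMetaData"]}

# Values arrive as strings ("True"/"False"), so compare against the string form.
if job["Status"] == "Complete" and meta.get("hasText") == "True":
    print(meta["ocrText"])
```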
-
-![Job - Get REST call response](images/test-drive-job-5.png)
-
-### Examine the new review(s)
-
-If your content job resulted in the creation of a review, you can view it in the [Review tool](https://contentmoderator.cognitive.microsoft.com). Select **Review** > **Image**/**Text**/**Video** (depending on what content you used). The content should appear, ready for human review. After a human moderator reviews the auto-assigned tags and prediction data and submits a final moderation decision, the jobs API submits all of this information to the designated callback endpoint.
-
-## Next steps
-
-In this guide, you learned how to create and query content moderation jobs using the REST API. Next, integrate jobs into an end-to-end moderation scenario, such as the [E-commerce moderation](./ecommerce-retail-catalog-moderation.md) tutorial.
cognitive-services Try Review Api Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-review.md
- Title: Create moderation reviews with REST API console - Content Moderator
-description: Use the Azure Content Moderator Review APIs to create image or text reviews for human moderation.
-Previously updated : 03/18/2019
-# Create human reviews (API console)
-[Reviews](./review-api.md#reviews) store and display content for human moderators to assess. When a user completes a review, the results are sent to a specified callback endpoint. In this guide, you'll learn how to set up reviews using the review REST APIs through the API console. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-## Create a review
-
-To create a review, go to the **[Review - Create](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4)** API reference page and select the button for your key region (you can find this in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/)). This starts the API console, where you can easily construct and run REST API calls.
-
-![Review - Get region selection](images/test-drive-region.png)
-
-### Enter REST call parameters
-
-Enter values for **teamName**, and **Ocp-Apim-Subscription-Key**:
-- **teamName**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-### Enter a review definition
-
-Edit the **Request body** box to enter the JSON request with the following fields:
-- **Metadata**: Custom key-value pairs to be returned to your callback endpoint. If the key is a short code that is defined in the [Review tool](https://contentmoderator.cognitive.microsoft.com), it appears as a tag.
-- **Content**: In the case of Image and Video content, this is a URL string pointing to the content. For text content, this is the actual text string.
-- **ContentId**: A custom identifier string. This string is passed to the API and returned through the callback. It is useful for associating internal identifiers or metadata with the results of a moderation job.
-- **CallbackEndpoint**: (Optional) The URL to receive callback information when the review is completed.
-The default request body shows examples of the different types of reviews you can create:
-
-```json
-[Image]
-[
- {
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- }
- ],
- "Type": "Image",
- "Content": "<Content Url>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>"
- }
-]
-[Text]
-[
- {
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- }
- ],
- "Type": "Text",
- "Content": "<Your Text Content>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>"
- }
-]
-[Video]
-[
- {
- "VideoFrames":[
- {
- "Id": "<Frame Id>",
- "Timestamp": "<Frame Timestamp>",
- "FrameImage": "<Frame Image URL>",
- "Metadata": [
- {
- "Key": "<Key>",
- "Value": "<Value>"
- }
- ],
- "ReviewerResultTags": [
- ]
- }
- ],
- "Metadata": [
- {
- "Key": "string",
- "Value": "string"
- },
- //For encrypted Videos
- {
- "Key": "protectedType",
- "Value": "AES or FairPlay or Widevine or Playready"
- },
- {
- "Key": "authenticationToken",
- "Value": "your viewtoken(In case of Video Indexer AES encryption type, this value is viewtoken from breakdown json)"
- },
- //For FairPlay encrypted type video include certificateUrl as well
- {
- "Key": "certificateUrl",
- "Value": "your certificate url"
- }
- ],
- "Type": "Video",
- "Content": "<Stream Url>",
- "ContentId": "<Your identifier for this content>",
- "CallbackEndpoint": "<Url where you would receive callbacks>",
- [Optional]
- "Timescale": "<Timescale of the video>"
- }
-]
-```
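Building that array by hand is error-prone, so a small helper can keep the shape consistent. This is an illustrative sketch only; the `image_review` helper, URLs, and tag values are hypothetical, and the output matches the Image shape shown above:

```python
import json

def image_review(content_url, content_id, callback="", tags=None):
    """Build one image review entry in the shape the Review - Create body expects."""
    return {
        "Metadata": [{"Key": k, "Value": v} for k, v in (tags or {}).items()],
        "Type": "Image",
        "Content": content_url,
        "ContentId": content_id,
        "CallbackEndpoint": callback,
    }

# The Review - Create body is a JSON array, so several reviews can go in one call.
body = json.dumps([
    image_review(
        "https://moderatorsampleimages.blob.core.windows.net/samples/sample2.jpg",
        "img-001",
        tags={"sc": "true"},
    )
])
print(body)
```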
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays an ID for the review. Copy this ID to use in the following steps.
-
-![Review - Create console Response content box displays the review ID](images/test-drive-review-2.PNG)
-
-### Examine the new review
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com), select **Review** > **Image**/**Text**/**Video** (depending on what content you used). The content that you uploaded should appear, ready for human review.
-
-![Review tool image of a soccer ball](images/test-drive-review-5.PNG)
-
-## Get review details
-
-To retrieve details about an existing review, go to the [Review - Get](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2) API reference page and select the button for your region (the region in which your key is administered).
-
-![Workflow - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. For this step, **reviewId** is the unique ID string you received when you created the review.
-
-![Review - Create console Get results](images/test-drive-review-3.PNG)
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the review details in JSON format, like the following:
-
-```json
-{
- "reviewId":"201712i46950138c61a4740b118a43cac33f434",
- "subTeam":"public",
- "status":"Complete",
- "reviewerResultTags":[
- {
- "key":"a",
- "value":"False"
- },
- {
- "key":"r",
- "value":"True"
- },
- {
- "key":"sc",
- "value":"True"
- }
- ],
- "createdBy":"<teamname>",
- "metadata":[
- {
- "key":"sc",
- "value":"true"
- }
- ],
- "type":"Image",
- "content":"https://reviewcontentprod.blob.core.windows.net/<teamname>/IMG_201712i46950138c61a4740b118a43cac33f434",
- "contentId":"0",
- "callbackEndpoint":"<callbackUrl>"
-}
-```
-
-Take note of the following fields in the response:
-- **status**
-- **reviewerResultTags**: This appears if any tags have been manually added by the human review team (shown in the **createdBy** field).
-- **metadata**: This shows the tags that were initially added in the review, before the human review team made changes.
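Comparing the initial metadata against the reviewer tags shows what the human review changed. A minimal sketch (trimmed copy of the sample response above; the string booleans are normalized before comparing):

```python
import json

# Trimmed Review - Get response, keeping only the tag-related fields.
review = json.loads("""
{
  "status": "Complete",
  "reviewerResultTags": [
    {"key": "a", "value": "False"},
    {"key": "r", "value": "True"},
    {"key": "sc", "value": "True"}
  ],
  "metadata": [
    {"key": "sc", "value": "true"}
  ]
}
""")

# Fold both tag lists into dicts, converting the "True"/"true" strings to bools.
final = {t["key"]: t["value"].lower() == "true" for t in review["reviewerResultTags"]}
initial = {t["key"]: t["value"].lower() == "true" for t in review["metadata"]}

# Keys where the human review added a tag or changed the initial decision.
changed = {k: v for k, v in final.items() if initial.get(k) != v}
print(changed)
```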
-## Next steps
-
-In this guide, you learned how to create content moderation reviews using the REST API. Next, integrate reviews into an end-to-end moderation scenario, such as the [E-commerce moderation](./ecommerce-retail-catalog-moderation.md) tutorial.
cognitive-services Try Review Api Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/try-review-api-workflow.md
- Title: Define moderation workflows with the REST API console - Content Moderator
-description: You can use the Azure Content Moderator Review APIs to define custom workflows and thresholds based on your content policies.
-Previously updated : 03/14/2019
-# Define and use moderation workflows (API console)
-Workflows are cloud-based customized filters that you can use to handle content more efficiently. Workflows can connect to a variety of services to filter content in different ways and then take the appropriate action. This guide shows you how to use the workflow REST APIs, through the API console, to create and use workflows. Once you understand the structure of the APIs, you can easily port these calls to any REST-compatible platform.
-
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-## Create a workflow
-
-To create or update a workflow, go to the **[Workflow - Create Or Update](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59)** API reference page and select the button for your key region. You can find your region in the Endpoint URL on the **Credentials** page of the [Review tool](https://contentmoderator.cognitive.microsoft.com/). This starts the API console, where you can easily construct and run REST API calls.
-
-![Workflow - Create Or Update page region selection](images/test-drive-region.png)
-
-### Enter REST call parameters
-
-Enter values for **team**, **workflowname**, and **Ocp-Apim-Subscription-Key**:
-- **team**: The team ID that you created when you set up your [Review tool](https://contentmoderator.cognitive.microsoft.com/) account (found in the **Id** field on your Review tool's Credentials screen).
-- **workflowname**: The name of a new workflow to add (or an existing name, if you want to update an existing workflow).
-- **Ocp-Apim-Subscription-Key**: Your Content Moderator key. You can find this key on the **Settings** tab of the [Review tool](https://contentmoderator.cognitive.microsoft.com).
-![Workflow - Create Or Update console query parameters and headers](images/workflow-console-parameters.PNG)
-
-### Enter a workflow definition
-
-1. Edit the **Request body** box to enter the JSON request with details for **Description** and **Type** (either `Image` or `Text`).
-2. For **Expression**, copy the default workflow JSON expression. Your final JSON string should look like this:
-
-```json
-{
- "Description":"<A description for the Workflow>",
- "Type":"Text",
- "Expression":{
- "Type":"Logic",
- "If":{
- "ConnectorName":"moderator",
- "OutputName":"isAdult",
- "Operator":"eq",
- "Value":"true",
- "Type":"Condition"
- },
- "Then":{
- "Perform":[
- {
- "Name":"createreview",
- "CallbackEndpoint":null,
- "Tags":[
-
- ]
- }
- ],
- "Type":"Actions"
- }
- }
-}
-```
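The If/Then structure above is a small condition-action rule. As a way to reason about it, here is a tiny interpreter for this single-condition case; it is an illustrative sketch only (the service itself evaluates expressions, and also supports the nested logic mentioned in the note below):

```python
# Operators for the single-condition workflow expression shown above.
# The service compares connector outputs as strings, so normalize case for "eq".
OPERATORS = {
    "eq": lambda a, b: str(a).lower() == str(b).lower(),
    "gt": lambda a, b: float(a) > float(b),
}

def evaluate(expression, connector_outputs):
    """Return the action names to perform if the If condition holds, else []."""
    cond = expression["If"]
    actual = connector_outputs[cond["ConnectorName"]][cond["OutputName"]]
    if OPERATORS[cond["Operator"]](actual, cond["Value"]):
        return [action["Name"] for action in expression["Then"]["Perform"]]
    return []

expression = {
    "Type": "Logic",
    "If": {"ConnectorName": "moderator", "OutputName": "isAdult",
           "Operator": "eq", "Value": "true", "Type": "Condition"},
    "Then": {"Perform": [{"Name": "createreview", "CallbackEndpoint": None, "Tags": []}],
             "Type": "Actions"},
}

# Simulated moderator output for one piece of content.
actions = evaluate(expression, {"moderator": {"isAdult": "true"}})
print(actions)
```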
-
-> [!NOTE]
-> You can define simple, complex, and even nested expressions for your workflows using this API. The [Workflow - Create Or Update](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b46b3f9b0711b43c4c59) documentation has examples of more complex logic.
-
-### Submit your request
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays `true`.
-
-### Examine the new workflow
-
-In the [Review tool](https://contentmoderator.cognitive.microsoft.com/), select **Settings** > **Workflows**. Your new workflow should appear in the list.
-
-![Review tool list of workflows](images/workflow-console-new-workflow.PNG)
-
-Select the **Edit** option for your workflow and go to the **Designer** tab. Here, you can see an intuitive representation of the JSON logic.
-
-![Designer tab for a selected workflow](images/workflow-console-new-workflow-designer.PNG)
-
-## Get workflow details
-
-To retrieve details about an existing workflow, go to the **[Workflow - Get](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/5813b44b3f9b0711b43c4c58)** API reference page and select the button for your region (the region in which your key is administered).
-
-![Workflow - Get region selection](images/test-drive-region.png)
-
-Enter the REST call parameters as in the above section. Make sure that this time, **workflowname** is the name of an existing workflow.
-
-![Get query parameters and headers](images/workflow-get-default.PNG)
-
-Select **Send**. If the operation succeeds, the **Response status** is `200 OK`, and the **Response content** box displays the workflow in JSON format, like the following example:
-
-```json
-{
- "Name":"default",
- "Description":"Default",
- "Type":"Image",
- "Expression":{
- "If":{
- "ConnectorName":"moderator",
- "OutputName":"isadult",
- "Operator":"eq",
- "Value":"true",
- "AlternateInput":null,
- "Type":"Condition"
- },
- "Then":{
- "Perform":[
- {
- "Name":"createreview",
- "Subteam":null,
- "CallbackEndpoint":null,
- "Tags":[
-
- ]
- }
- ],
- "Type":"Actions"
- },
- "Else":null,
- "Type":"Logic"
- }
-}
-```
-
-## Next steps
--- Learn how to use workflows with [content moderation jobs](try-review-api-job.md).
cognitive-services Video Moderation Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-moderation-api.md
After the Content Moderation job is completed, analyze the JSON response. It con
## Next steps
-Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) from your moderation output.
- [Download the Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
cognitive-services Video Moderation Human Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-moderation-human-review.md
- Title: Video moderation with the Review tool - Content Moderator
-description: Use machine-assisted video moderation and the Review tool to moderate inappropriate content.
-Previously updated : 07/20/2020
-# Video moderation with the Review tool
-Use Content Moderator's machine-assisted [video moderation](video-moderation-api.md) and [Review tool](Review-Tool-User-Guide/human-in-the-loop.md) to moderate videos and transcripts for adult (explicit) and racy (suggestive) content to get the best results for your business.
-
-## View videos under review
-
-On the dashboard, select any of the review queues within the video content type. This will start a review and open the video content moderation page.
-
-> [!div class="mx-imgBorder"]
-> ![On Content Moderator (Preview), the slider is highlighted and set to 4 reviews. The Blur All and Black and white switches are highlighted, and both are set.](./Review-Tool-User-Guide/images/video-moderation-detailed.png)
-
-### Review count
-
-Use the slider in the upper right to set the number of reviews you'd like to display on the page.
-
-### View type
-
-You can view the different content entries as tiles or in a detailed view. The **Detail** view will allow you to see key frames and other information about the selected video.
-
-> [!NOTE]
-> Instead of outputting frames at regular intervals, the video moderation service identifies and outputs only potentially complete (good) frames. This feature allows efficient frame generation for frame-level adult and racy analysis.
-
-The **Tiled** view will show each video as a single tile. Select the expand button above a video frame to enlarge that video and hide the others.
-
-### Content-obscuring effects
-
-Use the **Blur all** and **Black and white** toggles to set these content-obscuring effects. They're turned on by default. In the **Tiled** view, you can toggle the effects individually for each video.
-
-## Check video details
-
-In the **Detail** view, the right pane will show several tabs that give you details about the video.
-
-* Select the **Notes** tab to add custom notes to videos.
-* Select the **Transcript** tab to see the video transcript&mdash;the service automatically extracts a transcript of any speech in the video. When you select a section of text, the video player will jump to that part of the video.
-* Select the **Meta-data** tab to view video file metadata.
-* Select the **History** tab to see the history of the review, such as when it was created and how it was modified.
-
-> [!div class="mx-imgBorder"]
-> ![The right pane is highlighted, and the Notes tab is selected. There is a text area labelled Add a note.](./Review-Tool-User-Guide/images/video-moderation-video-details.png)
-
-## Apply moderation tags
-
-The main task of a video review is to apply or remove moderation tags on videos or parts of videos.
-
-### Bulk tagging
-
-The **Bulk Tags** toolbar lets you add tags to multiple selected videos at once. Select one or more videos, choose the tags you would like to apply, and then select **Submit**.
-
-> [!div class="mx-imgBorder"]
-> ![The + button is highlighted in the Bulk Tags pane.](./Review-Tool-User-Guide/images/video-moderation-bulk-tags.png)
--
-### Key frame tagging
-
-You can also add moderation tags to specific key frames. Select the frames from the key frame tile pane, and then select **Keyframe tags +** to apply the wanted tags.
-
-> [!NOTE]
-> If the service couldn't extract key frames, the key frame tile pane will show **No frames available** and the option to select key frames will be grayed out. In this case, you can only apply tags to the video as a whole (using the **Video tags +** button).
-
-> [!div class="mx-imgBorder"]
-> ![The tile pane, video player, Keyframe tags pane, and Video tags panes are all shown. The Keyframe tags + and Video tags + buttons are highlighted.](./Review-Tool-User-Guide/images/video-moderation-tagging-options.png)
-
-## Put a review on hold
-
-The **Hold** button at the bottom of the video pane lets you put a review on hold so you can retrieve it and complete it later. You may do this for a review that requires a consult from another team member or manager who is currently unavailable.
-
-You can view the videos on hold by clicking the **Hold** button at the top of the screen. The Hold pane appears on the right. From here, you can select multiple reviews on hold and either release them back into the queue, or set their expiration time. After the preconfigured amount of time, reviews on hold are released back to the queue. Select **Save** to start counting down from the currently selected expiration time.
-
-> [!div class="mx-imgBorder"]
-> ![On the video pane, the Hold button is highlighted. At the bottom of the pane, a Hold Time combo box is highlighted, along with Release and Save buttons.](./Review-Tool-User-Guide/images/video-moderation-hold.png)
-
-## Submit a review
-
-After you've applied your tags, select the **Submit** button at the bottom of the video pane. If you've tagged multiple videos, you can submit them under a single review or as separate reviews.
-
-## Limbo state
-
-After you've submitted a review, the video is moved to the **Limbo** state, which you can view by selecting the **Limbo** button at the top of the screen. Videos remain in the Limbo state for a preconfigured amount of time (which you can change in the menu at the bottom), or until they're reviewed again or manually submitted.
-
-Once the videos expire from limbo, their reviews are marked as complete.
-
-## Next steps
-- Get started with the [video moderation quickstart](video-moderation-api.md).
-- Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) for your human reviewers from your moderated output.
-- Add [video transcript reviews](video-transcript-reviews-quickstart-dotnet.md) to your video reviews.
cognitive-services Video Reviews Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-reviews-quickstart-dotnet.md
- Title: Create video reviews using .NET - Content Moderator
-description: This article provides information and code samples to help you quickly get started using the Content Moderator SDK with C# to create video reviews.
-Previously updated : 10/24/2019
-# Create video reviews using .NET
-This article provides information and code samples to help you quickly get started using the [Content Moderator SDK with C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-- Create a video review for human moderators
-- Add frames to a review
-- Get the frames for the review
-- Get the status and details of the review
-- Publish the review
-## Prerequisites
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site.
-- This article assumes that you have [moderated the video (see quickstart)](video-moderation-api.md) and have the response data. You need it for creating frame-based reviews for human moderators.
-## Ensure your API key can call the review API for review creation
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-### Prepare your video and the video frames for review
-
-The video and sample video frames to review must be published online because you need their URLs.
-
-> [!NOTE]
-> The program uses manually saved screenshots from the video with random adult/racy scores to illustrate the
-> use of the review API. In a real-world situation, you use the [video moderation output](video-moderation-api.md#run-the-program-and-review-the-output) to create images and assign scores.
-
-For the video, you need a streaming endpoint so that the review tool plays the video in the player view.
-
-![Video demo thumbnail](images/ams-video-demo-view.PNG)
-- Copy the **URL** on this [Azure Media Services demo](https://aka.ms/azuremediaplayer?url=https%3A%2F%2Famssamples.streaming.mediaservices.windows.net%2F91492735-c523-432b-ba01-faba6c2206a2%2FAzureMediaServicesPromo.ism%2Fmanifest) page for the manifest URL.
-For the video frames (images), use the following images:
-
-![Video frame thumbnail 1](images/ams-video-frame-thumbnails-1.PNG) | ![Video frame thumbnail 2](images/ams-video-frame-thumbnails-2.PNG) | ![Video frame thumbnail 3](images/ams-video-frame-thumbnails-3.PNG) |
-| :: | :: | :: |
-Frame 1 | Frame 2 | Frame 3 |
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
-1. Name the project **VideoReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages for the **VideoReviews** project.
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Microsoft.Rest.ClientRuntime.Azure
-- Newtonsoft.Json
-### Update the program's using statements
-
-Modify the program's using statements as follows.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-```
-
-### Add private properties
-
-Add the following private properties to namespace **VideoReviews**, class **Program**. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
--
-```csharp
-namespace VideoReviews
-{
- class Program
- {
- // NOTE: Enter a valid endpoint URL
- /// <summary>
- /// The endpoint URL of your subscription
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- // NOTE: Enter a valid subscription key.
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR CONTENT MODERATOR KEY";
-
- // NOTE: Replace this example team name with your Content Moderator team name.
- /// <summary>
- /// The name of the team to assign the job to.
- /// </summary>
- /// <remarks>This must be the team name you used to create your
- /// Content Moderator account. You can retrieve your team name from
- /// the Content Moderator web site. Your team name is the Id associated
- /// with your subscription.</remarks>
- private const string TeamName = "YOUR CONTENT MODERATOR TEAM ID";
-
- /// <summary>
- /// The minimum amount of time, in milliseconds, to wait between calls
- /// to the Content Moderator APIs.
- /// </summary>
- private const int throttleRate = 2000;
-```
-
-### Create Content Moderator Client object
-
-Add the following method definition to namespace **VideoReviews**, class **Program**.
-
-```csharp
-/// <summary>
-/// Returns a new Content Moderator client for your subscription.
-/// </summary>
-/// <returns>The new client.</returns>
-/// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
-/// When you have finished using the client,
-/// you should dispose of it either directly or indirectly. </remarks>
-public static ContentModeratorClient NewClient()
-{
- return new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey))
- {
- Endpoint = AzureEndpoint
- };
-}
-```
-
-## Create a video review
-
-Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**. For more information, see the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4).
-
-**CreateVideoReviews** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json."
-1. Your Content Moderator team name.
-1. An **IList\<CreateVideoReviewsBodyItem>** object. Each **CreateVideoReviewsBodyItem** object represents a video review. This quickstart creates one review at a time.
-
-**CreateVideoReviewsBodyItem** has several properties. At a minimum, you set the following properties:
-- **Content**. The URL of the video to be reviewed.
-- **ContentId**. An ID to assign to the video review.
-- **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.
-
-> [!NOTE]
-> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="id">The ID to assign to the video review.</param>
-/// <param name="content">The URL of the video to review.</param>
-/// <returns>The ID of the video review.</returns>
-private static string CreateReview(ContentModeratorClient client, string id, string content)
-{
- Console.WriteLine("Creating a video review.");
-
- List<CreateVideoReviewsBodyItem> body = new List<CreateVideoReviewsBodyItem>() {
- new CreateVideoReviewsBodyItem
- {
- Content = content,
- ContentId = id,
- /* Note: to create a published review, set the Status to "Pending".
- However, you cannot add video frames or a transcript to a published review. */
- Status = "Unpublished",
- }
- };
-
- var result = client.Reviews.CreateVideoReviews("application/json", TeamName, body);
-
- Thread.Sleep(throttleRate);
-
- // We created only one review.
- return result[0];
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS) rate limit, and if you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
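-
-If your application must tolerate throttling rather than only pacing its requests, you can wrap SDK calls in a small retry helper. The following sketch is an illustration, not part of the official sample; it assumes the SDK surfaces rate-limit failures as an `APIErrorException` whose response carries HTTP status 429.
-
-```csharp
-// Hypothetical helper: retry a Content Moderator call when the service
-// returns HTTP 429 (rate limit exceeded), backing off between attempts.
-private static T CallWithRetry<T>(Func<T> apiCall, int maxAttempts = 3)
-{
-    for (int attempt = 1; ; attempt++)
-    {
-        try
-        {
-            return apiCall();
-        }
-        catch (APIErrorException ex) when (
-            ex.Response != null &&
-            (int)ex.Response.StatusCode == 429 &&
-            attempt < maxAttempts)
-        {
-            // Wait longer on each successive attempt before retrying.
-            Thread.Sleep(throttleRate * attempt);
-        }
-    }
-}
-```
-
-For example, `CallWithRetry(() => client.Reviews.CreateVideoReviews("application/json", TeamName, body))` retries the review creation up to three times before letting the exception propagate.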
-
-## Add video frames to the video review
-
-You add video frames to a video review with **ContentModeratorClient.Reviews.AddVideoFrameUrl** (if your video frames are hosted online) or **ContentModeratorClient.Reviews.AddVideoFrameStream** (if your video frames are hosted locally). This quickstart assumes your video frames are hosted online, and so uses **AddVideoFrameUrl**. For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b76ae7151f0b10d451fd).
-
-**AddVideoFrameUrl** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json."
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. An **IList\<VideoFrameBodyItem>** object. Each **VideoFrameBodyItem** object represents a video frame.
-
-**VideoFrameBodyItem** has the following properties:
-- **Timestamp**. A string that contains, in seconds, the time in the video from which the video frame was taken.
-- **FrameImage**. The URL of the video frame.
-- **Metadata**. An IList\<VideoFrameBodyItemMetadataItem>. **VideoFrameBodyItemMetadataItem** is simply a key/value pair. Valid keys include:
-  - **reviewRecommended**. True if a human review of the video frame is recommended.
-  - **adultScore**. A value from 0 to 1 that rates the severity of adult content in the video frame.
-  - **a**. True if the video frame contains adult content.
-  - **racyScore**. A value from 0 to 1 that rates the severity of racy content in the video frame.
-  - **r**. True if the video frame contains racy content.
-- **ReviewerResultTags**. An IList\<VideoFrameBodyItemReviewerResultTagsItem>. **VideoFrameBodyItemReviewerResultTagsItem** is simply a key/value pair. An application can use these tags to organize video frames.
-
-> [!NOTE]
-> This quickstart generates random values for the **adultScore** and **racyScore** properties. In a production application, you would obtain these values from the [video moderation service](video-moderation-api.md), deployed as an Azure Media Service.
-
-Add the following method definitions to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video frame to add to a video review after the video review is created.
-/// </summary>
-/// <param name="url">The URL of the video frame image.</param>
-/// <param name="timestamp_seconds">The timestamp of the frame, in seconds.</param>
-/// <returns>The video frame.</returns>
-private static VideoFrameBodyItem CreateFrameToAddToReview(string url, string timestamp_seconds)
-{
- // We generate random "adult" and "racy" scores for the video frame.
- Random rand = new Random();
-
- var frame = new VideoFrameBodyItem
- {
- // The timestamp is measured in milliseconds. Convert from seconds.
- Timestamp = (int.Parse(timestamp_seconds) * 1000).ToString(),
- FrameImage = url,
-
- Metadata = new List<VideoFrameBodyItemMetadataItem>
- {
- new VideoFrameBodyItemMetadataItem("reviewRecommended", "true"),
- new VideoFrameBodyItemMetadataItem("adultScore", rand.NextDouble().ToString()),
- new VideoFrameBodyItemMetadataItem("a", "false"),
- new VideoFrameBodyItemMetadataItem("racyScore", rand.NextDouble().ToString()),
- new VideoFrameBodyItemMetadataItem("r", "false")
- },
-
- ReviewerResultTags = new List<VideoFrameBodyItemReviewerResultTagsItem>()
- {
- new VideoFrameBodyItemReviewerResultTagsItem("tag1", "value1")
- }
- };
-
- return frame;
-}
-```
-
-```csharp
-/// <summary>
-/// Add a video frame to the indicated video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b76ae7151f0b10d451fd
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="url">The URL of the video frame image.</param>
-/// <param name="timestamp_seconds">The timestamp of the frame, in seconds.</param>
-static void AddFrame(ContentModeratorClient client, string review_id, string url, string timestamp_seconds)
-{
- Console.WriteLine("Adding a frame to the review with ID {0}.", review_id);
-
- var frames = new List<VideoFrameBodyItem>()
- {
- CreateFrameToAddToReview(url, timestamp_seconds)
- };
-
- client.Reviews.AddVideoFrameUrl("application/json", TeamName, review_id, frames);
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Get video frames for video review
-
-You can get the video frames for a video review with **ContentModeratorClient.Reviews.GetVideoFrames**. **GetVideoFrames** has the following parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. (Optional) The zero-based index of the first video frame to get.
-1. (Optional) The number of video frames to get.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Get the video frames assigned to the indicated video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7ba43e7151f0b10d45200
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-static void GetFrames(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Getting frames for the review with ID {0}.", review_id);
-
- Frames result = client.Reviews.GetVideoFrames(TeamName, review_id, 0);
- Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Get video review information
-
-You get information for a video review with **ContentModeratorClient.Reviews.GetReview**. **GetReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Get the information for the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c2
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void GetReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Getting the status for the review with ID {0}.", review_id);
-
- var result = client.Reviews.GetReview(TeamName, review_id);
- Console.WriteLine(JsonConvert.SerializeObject(result, Formatting.Indented));
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Publish video review
-
-You publish a video review with **ContentModeratorClient.Reviews.PublishVideoReview**. **PublishVideoReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoReviews, class Program.
-
-```csharp
-/// <summary>
-/// Publish the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7bb29e7151f0b10d45201
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void PublishReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Publishing the review with ID {0}.", review_id);
- client.Reviews.PublishVideoReview(TeamName, review_id);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Putting it all together
-
-Add the **Main** method definition to namespace VideoReviews, class Program. Finally, close the Program class and the VideoReviews namespace.
-
-```csharp
-static void Main(string[] args)
-{
- using (ContentModeratorClient client = NewClient())
- {
- // Create a review with the content pointing to a streaming endpoint (manifest)
- var streamingcontent = "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest";
- string review_id = CreateReview(client, "review1", streamingcontent);
-
- var frame1_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame1-00-17.PNG";
- var frame2_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-2-01-04.PNG";
- var frame3_url = "https://blobthebuilder.blob.core.windows.net/sampleframes/ams-video-frame-3-02-24.PNG";
-
- // Add the frames from 17, 64, and 144 seconds.
- AddFrame(client, review_id, frame1_url, "17");
- AddFrame(client, review_id, frame2_url, "64");
- AddFrame(client, review_id, frame3_url, "144");
-
- // Get frames information and show
- GetFrames(client, review_id);
- GetReview(client, review_id);
-
- // Publish the review
- PublishReview(client, review_id);
-
- Console.WriteLine("Open your Content Moderator Dashboard and select Review > Video to see the review.");
- Console.WriteLine("Press any key to close the application.");
- Console.ReadKey();
- }
-}
-```
-
-## Run the program and review the output
-When you run the application, you see output similar to the following:
-
-```console
-Creating a video review.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Adding a frame to the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Getting frames for the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-{
- "ReviewId": "201801v3212bda70ced4928b2cd7459c290c7dc",
- "VideoFrames": [
- {
- "Timestamp": "17000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_17000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.808312381528463"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.846378884206702"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- },
- {
- "Timestamp": "64000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_64000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.576078300166912"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.244768953064815"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- },
- {
- "Timestamp": "144000",
- "FrameImage": "https://reviewcontentprod.blob.core.windows.net/testreview6/FRM_201801v3212bda70ced4928b2cd7459c290c7dc_144000.PNG",
- "Metadata": [
- {
- "Key": "reviewRecommended",
- "Value": "true"
- },
- {
- "Key": "adultScore",
- "Value": "0.664480847150311"
- },
- {
- "Key": "a",
- "Value": "false"
- },
- {
- "Key": "racyScore",
- "Value": "0.933817870418456"
- },
- {
- "Key": "r",
- "Value": "false"
- }
- ],
- "ReviewerResultTags": [
- {
- "Key": "tag1",
- "Value": "value1"
- }
- ]
- }
- ]
-}
-
-Getting the status for the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-{
- "ReviewId": "201801v3212bda70ced4928b2cd7459c290c7dc",
- "SubTeam": "public",
- "Status": "UnPublished",
- "ReviewerResultTags": [],
- "CreatedBy": "testreview6",
- "Metadata": [
- {
- "Key": "FrameCount",
- "Value": "3"
- }
- ],
- "Type": "Video",
- "Content": "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest",
- "ContentId": "review1",
- "CallbackEndpoint": null
-}
-
-Publishing the review with ID 201801v3212bda70ced4928b2cd7459c290c7dc.
-Open your Content Moderator Dashboard and select Review > Video to see the review.
-Press any key to close the application.
-```
-
-## Check out your video review
-
-Finally, you see the video review in your Content Moderator review tool account on the **Review**>**Video** screen.
-
-![Video review for human moderators](images/ams-video-review.PNG)
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
cognitive-services Video Transcript Reviews Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/video-transcript-reviews-quickstart-dotnet.md
-title: Create video transcript reviews using .NET - Content Moderator
-description: Learn how to create video transcript reviews using the Azure Cognitive Services Content Moderator SDK for .NET.
-Previously updated : 10/24/2019
-# Create video transcript reviews using .NET
-
-This article provides information and code samples to help you quickly get started using the [Content Moderator SDK with C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) to:
-
-- Create a video review for human moderators
-- Add a moderated transcript to the review
-- Publish the review
-
-## Prerequisites
-
-- Sign in or create an account on the Content Moderator [Review tool](https://contentmoderator.cognitive.microsoft.com/) site if you haven't done so already.
-- This article assumes that you have [moderated the video](video-moderation-api.md) and [created the video review](video-reviews-quickstart-dotnet.md) in the review tool for human decision making. You now want to add moderated video transcripts in the review tool.
-
-## Ensure your API key can call the review API (Job creation)
-
-After completing the previous steps, you may end up with two Content Moderator keys if you started from the Azure portal.
-
-If you plan to use the Azure-provided API key in your SDK sample, follow the steps mentioned in the [Using Azure key with the review API](./review-tool-user-guide/configure.md#use-your-azure-account-with-the-review-apis) section to allow your application to call the review API and create reviews.
-
-If you use the free trial key generated by the review tool, your review tool account already knows about the key and therefore, no additional steps are required.
-
-## Prepare your video for review
-
-To add a transcript to a video review, the video must be published online and you need its streaming endpoint, which allows the review tool video player to play the video.
-
-![Video demo thumbnail](images/ams-video-demo-view.PNG)
-
-- Copy the **URL** on this [Azure Media Services demo](https://aka.ms/azuremediaplayer?url=https%3A%2F%2Famssamples.streaming.mediaservices.windows.net%2F91492735-c523-432b-ba01-faba6c2206a2%2FAzureMediaServicesPromo.ism%2Fmanifest) page for the manifest URL.
-
-## Create your Visual Studio project
-
-1. Add a new **Console app (.NET Framework)** project to your solution.
-
-1. Name the project **VideoTranscriptReviews**.
-
-1. Select this project as the single startup project for the solution.
-
-### Install required packages
-
-Install the following NuGet packages for the VideoTranscriptReviews project.
-
-- Microsoft.Azure.CognitiveServices.ContentModerator
-- Microsoft.Rest.ClientRuntime
-- Microsoft.Rest.ClientRuntime.Azure
-- Newtonsoft.Json
-
-### Update the program's using statements
-
-Modify the program's using statements as follows.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using System.IO;
-using System.Threading;
-using Microsoft.Azure.CognitiveServices.ContentModerator;
-using Microsoft.Azure.CognitiveServices.ContentModerator.Models;
-using Newtonsoft.Json;
-```
-
-### Add private properties
-
-Add the following private properties to namespace **VideoTranscriptReviews**, class **Program**. Update the `AzureEndpoint` and `CMSubscriptionKey` fields with the values of your endpoint URL and subscription key. You can find these in the **Quick start** tab of your resource in the Azure portal.
-
-```csharp
-namespace VideoTranscriptReviews
-{
- class Program
- {
- // NOTE: Enter a valid endpoint URL
- /// <summary>
- /// The endpoint URL of your subscription
- /// </summary>
- private static readonly string AzureEndpoint = "YOUR ENDPOINT URL";
-
- // NOTE: Enter a valid subscription key.
- /// <summary>
- /// Your Content Moderator subscription key.
- /// </summary>
- private static readonly string CMSubscriptionKey = "YOUR CONTENT MODERATOR KEY";
-
- // NOTE: Replace this example team name with your Content Moderator team name.
- /// <summary>
- /// The name of the team to assign the job to.
- /// </summary>
- /// <remarks>This must be the team name you used to create your
- /// Content Moderator account. You can retrieve your team name from
- /// the Content Moderator web site. Your team name is the Id associated
- /// with your subscription.</remarks>
- private const string TeamName = "YOUR CONTENT MODERATOR TEAM ID";
-
- /// <summary>
- /// The minimum amount of time, in milliseconds, to wait between calls
- /// to the Content Moderator APIs.
- /// </summary>
- private const int throttleRate = 2000;
-```
-
-### Create Content Moderator Client object
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Returns a new Content Moderator client for your subscription.
-/// </summary>
-/// <returns>The new client.</returns>
-/// <remarks>The <see cref="ContentModeratorClient"/> is disposable.
-/// When you have finished using the client,
-/// you should dispose of it either directly or indirectly. </remarks>
-public static ContentModeratorClient NewClient()
-{
- return new ContentModeratorClient(new ApiKeyServiceClientCredentials(CMSubscriptionKey))
- {
- Endpoint = AzureEndpoint
- };
-}
-```
-
-## Create a video review
-
-Create a video review with **ContentModeratorClient.Reviews.CreateVideoReviews**. For more information, see the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4).
-
-**CreateVideoReviews** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json."
-1. Your Content Moderator team name.
-1. An **IList\<CreateVideoReviewsBodyItem>** object. Each **CreateVideoReviewsBodyItem** object represents a video review. This quickstart creates one review at a time.
-
-**CreateVideoReviewsBodyItem** has several properties. At a minimum, you set the following properties:
-- **Content**. The URL of the video to be reviewed.
-- **ContentId**. An ID to assign to the video review.
-- **Status**. Set the value to "Unpublished." If you do not set it, it defaults to "Pending", which means the video review is published and pending human review. Once a video review is published, you can no longer add video frames, a transcript, or a transcript moderation result to it.
-
-> [!NOTE]
-> **CreateVideoReviews** returns an IList\<string>. Each of these strings contains an ID for a video review. These IDs are GUIDs and are not the same as the value of the **ContentId** property.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Create a video review. For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/580519483f9b0709fc47f9c4
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="id">The ID to assign to the video review.</param>
-/// <param name="content">The URL of the video to review.</param>
-/// <returns>The ID of the video review.</returns>
-private static string CreateReview(ContentModeratorClient client, string id, string content)
-{
- Console.WriteLine("Creating a video review.");
-
- List<CreateVideoReviewsBodyItem> body = new List<CreateVideoReviewsBodyItem>() {
- new CreateVideoReviewsBodyItem
- {
- Content = content,
- ContentId = id,
- /* Note: to create a published review, set the Status to "Pending".
- However, you cannot add video frames or a transcript to a published review. */
- Status = "Unpublished",
- }
- };
-
- var result = client.Reviews.CreateVideoReviews("application/json", TeamName, body);
-
- Thread.Sleep(throttleRate);
-
- // We created only one review.
- return result[0];
-}
-```
-
-> [!NOTE]
-> Your Content Moderator service key has a requests per second (RPS) rate limit. If you exceed the limit, the SDK throws an exception with a 429 error code.
->
-> A free tier key has a one RPS rate limit.
-
-## Add transcript to video review
-
-You add a transcript to a video review with **ContentModeratorClient.Reviews.AddVideoTranscript**. **AddVideoTranscript** has the following required parameters:
-1. Your Content Moderator team ID.
-1. The video review ID returned by **CreateVideoReviews**.
-1. A **Stream** object that contains the transcript.
-
-The transcript must be in the WebVTT format. For more information, see [WebVTT: The Web Video Text Tracks Format](https://www.w3.org/TR/webvtt1/).
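-
-For reference, a minimal WebVTT file is the `WEBVTT` header followed by one or more timed cues:
-
-```
-WEBVTT
-
-00:00.000 --> 00:04.000
-This is the first cue of the transcript.
-
-00:05.000 --> 00:09.000
-This is the second cue.
-```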
-
-> [!NOTE]
-> The program uses a sample transcript in the VTT format. In a real-world solution, you use the Azure Media Indexer service to [generate a transcript](../../media-services/previous/media-services-index-content.md) from a video.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Add a transcript to the indicated video review.
-/// The transcript must be in the WebVTT format.
-/// For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b8b2e7151f0b10d451fe
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="transcript">The video transcript.</param>
-static void AddTranscript(ContentModeratorClient client, string review_id, string transcript)
-{
- Console.WriteLine("Adding a transcript to the review with ID {0}.", review_id);
- client.Reviews.AddVideoTranscript(TeamName, review_id, new MemoryStream(System.Text.Encoding.UTF8.GetBytes(transcript)));
- Thread.Sleep(throttleRate);
-}
-```
-
-## Add a transcript moderation result to video review
-
-In addition to adding a transcript to a video review, you also add the result of moderating that transcript. You do so with **ContentModeratorClient.Reviews.AddVideoTranscriptModerationResult**. For more information, see the [API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b93ce7151f0b10d451ff).
-
-**AddVideoTranscriptModerationResult** has the following required parameters:
-1. A string that contains a MIME type, which should be "application/json."
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-1. An IList\<TranscriptModerationBodyItem>. A **TranscriptModerationBodyItem** has the following properties:
-   - **Timestamp**. A string that contains, in seconds, the time in the transcript at which the terms are found.
-   - **Terms**. An IList\<TranscriptModerationBodyItemTermsItem>. A **TranscriptModerationBodyItemTermsItem** has the following properties:
-     - **Index**. The zero-based index of the term.
-     - **Term**. A string that contains the term.
-
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program. This method submits a transcript to the **ContentModeratorClient.TextModeration.ScreenText** method. It also translates the result into an IList\<TranscriptModerationBodyItem>, and submits to **AddVideoTranscriptModerationResult**.
-
-```csharp
-/// <summary>
-/// Add the results of moderating a video transcript to the indicated video review.
-/// For more information, see the API reference:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7b93ce7151f0b10d451ff
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-/// <param name="transcript">The video transcript.</param>
-static void AddTranscriptModerationResult(ContentModeratorClient client, string review_id, string transcript)
-{
- Console.WriteLine("Adding a transcript moderation result to the review with ID {0}.", review_id);
-
- // Screen the transcript using the Text Moderation API. For more information, see:
- // https://westus2.dev.cognitive.microsoft.com/docs/services/57cf753a3f9b070c105bd2c1/operations/57cf753a3f9b070868a1f66f
- Screen screen = client.TextModeration.ScreenText("eng", "text/plain", transcript);
-
- // Map the term list returned by ScreenText into a term list we can pass to AddVideoTranscriptModerationResult.
- List<TranscriptModerationBodyItemTermsItem> terms = new List<TranscriptModerationBodyItemTermsItem>();
- if (null != screen.Terms)
- {
- foreach (var term in screen.Terms)
- {
- if (term.Index.HasValue)
- {
- terms.Add(new TranscriptModerationBodyItemTermsItem(term.Index.Value, term.Term));
- }
- }
- }
-
- List<TranscriptModerationBodyItem> body = new List<TranscriptModerationBodyItem>()
- {
- new TranscriptModerationBodyItem()
- {
- Timestamp = "0",
- Terms = terms
- }
- };
-
- client.Reviews.AddVideoTranscriptModerationResult("application/json", TeamName, review_id, body);
-
- Thread.Sleep(throttleRate);
-}
-```
-
-## Publish video review
-
-You publish a video review with **ContentModeratorClient.Reviews.PublishVideoReview**. **PublishVideoReview** has the following required parameters:
-1. Your Content Moderator team name.
-1. The video review ID returned by **CreateVideoReviews**.
-
-Add the following method definition to namespace VideoTranscriptReviews, class Program.
-
-```csharp
-/// <summary>
-/// Publish the indicated video review. For more information, see the reference API:
-/// https://westus2.dev.cognitive.microsoft.com/docs/services/580519463f9b070e5c591178/operations/59e7bb29e7151f0b10d45201
-/// </summary>
-/// <param name="client">The Content Moderator client.</param>
-/// <param name="review_id">The video review ID.</param>
-private static void PublishReview(ContentModeratorClient client, string review_id)
-{
- Console.WriteLine("Publishing the review with ID {0}.", review_id);
- client.Reviews.PublishVideoReview(TeamName, review_id);
- Thread.Sleep(throttleRate);
-}
-```
-
-## Putting it all together
-
-Add the **Main** method definition to namespace VideoTranscriptReviews, class Program. Finally, close the Program class and the VideoTranscriptReviews namespace.
-
-> [!NOTE]
-> The program uses a sample transcript in the VTT format. In a real-world solution, you use the Azure Media Indexer service to [generate a transcript](../../media-services/previous/media-services-index-content.md) from a video.
-
-```csharp
-static void Main(string[] args)
-{
- using (ContentModeratorClient client = NewClient())
- {
- // Create a review with the content pointing to a streaming endpoint (manifest)
- var streamingcontent = "https://amssamples.streaming.mediaservices.windows.net/91492735-c523-432b-ba01-faba6c2206a2/AzureMediaServicesPromo.ism/manifest";
- string review_id = CreateReview(client, "review1", streamingcontent);
-
- var transcript = @"WEBVTT
-
- 01:01.000 --> 02:02.000
- First line with a negative word in a transcript.
-
- 02:03.000 --> 02:25.000
- This is another line in the transcript.
- ";
-
- AddTranscript(client, review_id, transcript);
-
- AddTranscriptModerationResult(client, review_id, transcript);
-
- // Publish the review
- PublishReview(client, review_id);
-
- Console.WriteLine("Open your Content Moderator Dashboard and select Review > Video to see the review.");
- Console.WriteLine("Press any key to close the application.");
- Console.ReadKey();
- }
-}
-```
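The `Main` method above embeds a transcript in WebVTT form: each cue is a timing line such as `01:01.000 --> 02:02.000` followed by caption text. As a rough illustration of that timing format (a hypothetical helper, not part of the Content Moderator SDK), the start of a cue can be extracted like this:

```csharp
using System;
using System.Text.RegularExpressions;

static class VttCueParser
{
    // WebVTT cue timing in mm:ss.fff form, e.g. "01:01.000 --> 02:02.000".
    static readonly Regex CueTiming =
        new Regex(@"^(\d{2}):(\d{2})\.(\d{3})\s*-->\s*(\d{2}):(\d{2})\.(\d{3})$");

    // Returns the cue start time in milliseconds, or -1 if the line
    // is not a cue timing line (for example, a caption text line).
    public static int StartMilliseconds(string line)
    {
        Match m = CueTiming.Match(line.Trim());
        if (!m.Success) return -1;

        int minutes = int.Parse(m.Groups[1].Value);
        int seconds = int.Parse(m.Groups[2].Value);
        int millis = int.Parse(m.Groups[3].Value);
        return (minutes * 60 + seconds) * 1000 + millis;
    }
}
```

Real transcripts may also use `hh:mm:ss.fff` timings, so a production parser should accept both forms.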
-
-## Run the program and review the output
-
-When you run the application, you see output similar to the following:
-
-```console
-Creating a video review.
-Adding a transcript to the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Adding a transcript moderation result to the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Publishing the review with ID 201801v5b08eefa0d2d4d64a1942aec7f5cacc3.
-Open your Content Moderator Dashboard and select Review > Video to see the review.
-Press any key to close the application.
-```
-
-## Navigate to your video transcript review
-
-Go to the video transcript review in your Content Moderator review tool on the **Review**>**Video**>**Transcript** screen.
-
-You see the following features:
-- The two lines of transcript you added
-- The profanity term found and highlighted by the text moderation service
-- Selecting a transcription text starts the video from that timestamp
-
-![Video transcript review for human moderators](images/ams-video-transcript-review.PNG)
-
-## Next steps
-
-Get the [Content Moderator .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.ContentModerator/) and the [Visual Studio solution](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/ContentModerator) for this and other Content Moderator quickstarts for .NET.
-
-Learn how to generate [video reviews](video-reviews-quickstart-dotnet.md) in the review tool.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/whats-new.md
- Title: What's new in Content Moderator?
-description: This article contains news about Content Moderator.
- Previously updated : 09/28/2021
-# What's new in Content Moderator
--
-Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
-
-## August 2020
-
-### New video moderation UI
-
-* The Review tool's video moderation page has been updated with many new tools and features to facilitate tagging and processing videos and parts of videos. See the [Video moderation guide](./video-moderation-human-review.md) to learn more.
-
-### New dashboard
-
-* The Review tool dashboard has been updated to show more information about open, pending, and completed reviews. See the [Review tool guide](./Review-Tool-User-Guide/human-in-the-loop.md) for a tour of the new features.
-
-## Cognitive Services updates
-
-[Azure update announcements for Cognitive Services](https://azure.microsoft.com/updates/?product=cognitive-services)
cognitive-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/data-sources-and-content.md
Title: Data sources and content types - QnA Maker
description: Learn how to import question and answer pairs from data sources and supported content types, which include many standard structured documents such as PDF, DOCX, and TXT - QnA Maker.
Previously updated : 10/13/2020
Last updated : 01/11/2022

# Importing from data sources

A knowledge base consists of question and answer pairs brought in by public URLs and files.

## Data source locations

Content is brought into a knowledge base from a data source. Data source locations are **public URLs or files**, which do not require authentication.
cognitive-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
description: Improve the quality of your knowledge base with active learning. Re
Previously updated : 04/06/2020
Last updated : 01/11/2022
ms.devlang: csharp, javascript
<a name="accept-an-active-learning-suggestion-in-the-knowledge-base"></a>
-Active Learning alters the Knowledge Base or Search Service after you approve the suggestion, then save and train. If you approve the suggestion it will be added as an alternate question.
+Active Learning alters the Knowledge Base or Search Service after you approve the suggestion, then save and train. If you approve the suggestion, it will be added as an alternate question.
## Turn on active learning
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
## Speech Devices SDK 0.5.0: 2018-Aug release

- Improved the accuracy of speech recognition by fixing a bug in the audio processing code.
-- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0. For more information, see its
-  [release notes](releasenotes.md#cognitive-services-speech-sdk-050-2018-july-release).
+- Updated the [Speech SDK](./speech-sdk.md) component to version 0.5.0.
## Speech Devices SDK 0.2.12733: 2018-May release
cognitive-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-lower-speech-synthesis-latency.md
See [How to configure OpenSSL for Linux](how-to-configure-openssl-linux.md#certi
### Use latest Speech SDK
-We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
-For example, we fixed a `TCP_NODELAY` setting issue in [1.16.0](releasenotes.md#speech-sdk-1160-2021-march-release), which reduced extra one route trip time.
+We keep improving the Speech SDK's performance, so try to use the latest Speech SDK in your application.
## Load test guideline
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Previously updated : 05/15/2021
Last updated : 01/11/2022

# Speech Service release notes
+See below for information about changes to Speech services and resources.
-## OnPrem Speech 2022-Jan release
+## What's new?
-### Speech-to-text Container v2.18.0
-- Regular monthly updates (including security upgrades and vulnerability fixes).
+* Speaker Recognition service is generally available (GA). With [Speaker Recognition](./speaker-recognition-overview.md) you can accurately verify and identify speakers by their unique voice characteristics.
+* Speech SDK 1.19.0 release including Speaker Recognition support, Mac M1 ARM support, OpenSSL linking in Linux is dynamic, and Ubuntu 16.04 is no longer supported.
+* Custom Neural Voice extended to support [49 locales](./language-support.md#custom-neural-voice).
+* Commitment Tiers added to [pricing options](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-### Neural-text-to-speech Container v1.12.0
-- Support for new locale voices `am-et-amehaneural`, `am-et-mekdesneural`, `so-so-muuseneural`, and `so-so-ubaxneural`.
-- Regular monthly updates (including security upgrades and vulnerability fixes).
+## Release notes
-## Speech SDK 1.19.0: 2021-Nov release
+**Choose a service or resource**
-
+# [SDK](#tab/speech-sdk)
-**Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk). The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-#### Highlights
+# [CLI](#tab/speech-cli)
-- Speaker Recognition service is generally available (GA) now. Speech SDK APIs are available on C++, C#, Java, and JavaScript. With Speaker Recognition you can accurately verify and identify speakers by their unique voice characteristics. See the [documentation](speaker-recognition-overview.md) for more details.
-- We have dropped support for Ubuntu 16.04 in conjunction with Azure DevOps and GitHub. Ubuntu 16.04 reached end of life back in April of 2021. Please migrate Ubuntu 16.04 workflows to Ubuntu 18.04 or newer.
+# [Text-to-speech](#tab/text-to-speech)
-- OpenSSL linking in Linux binaries changed to dynamic. Linux binary size has been reduced by about 50%.
-- Mac M1 ARM-based silicon support added.
+# [Speech-to-text](#tab/speech-to-text)
-
-#### New features
+# [Containers](#tab/containers)
-- **C++/C#/Java**: New APIs added to enable audio processing support for speech input with Microsoft Audio Stack. Documentation [here](audio-processing-overview.md).
-- **C++**: New APIs for intent recognition to facilitate more advanced pattern matching. This includes List and Prebuilt Integer entities as well as support for grouping intents and entities as models (documentation, updates, and samples are under development and will be published in the near future).
-- **Mac**: Support for ARM64 (M1)-based silicon for CocoaPod, Python, Java, and NuGet packages, related to [GitHub issue 1244](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1244).
-- **iOS/Mac**: iOS and macOS binaries are now packaged into an xcframework, related to [GitHub issue 919](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/919).
-- **iOS/Mac**: Support for Mac Catalyst, related to [GitHub issue 1171](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1171).
-- **Linux**: New tar package added for CentOS 7. See [About the Speech SDK](speech-sdk.md).
-- **JavaScript**: VoiceProfile and SpeakerRecognizer APIs made async/awaitable.
-- **JavaScript**: Support added for US government Azure regions.
-- **Windows**: Support added for playback on Universal Windows Platform (UWP).
-
-
-
-#### Bug fixes
-- **Android**: OpenSSL security update (updated to version 1.1.1l) for Android packages.
-- **Python**: Resolved a bug where selecting a speaker device on Python fails.
-- **Core**: Automatically reconnect when a connection attempt fails.
-- **iOS**: Audio compression disabled on iOS packages due to instability and bitcode build problems when using GStreamer. Details are available via [GitHub issue 1209](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1209).
-
-
-
-#### Samples [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk)
-- **Mac/iOS**: Updated samples and quickstarts to use the xcframework package.
-- **.NET**: Samples updated to use .NET Core 3.1.
-- **JavaScript**: Added a sample for voice assistants.
-
-
-## Text-to-Speech 2021-October release
-**New languages and voices added for Neural TTS**
--- **49 new languages and variants introduced** - 98 new voices in 49 new locales are added into the neural TTS languages list: Adri in `af-ZA` Afrikaans (South Africa), Willem in `af-ZA` Afrikaans (South Africa), Mekdes in `am-ET` Amharic (Ethiopia), Ameha in `am-ET` Amharic (Ethiopia), Fatima in `ar-AE` Arabic (United Arab Emirates), Hamdan in `ar-AE` Arabic (United Arab Emirates), Laila in `ar-BH` Arabic (Bahrain), Ali in `ar-BH` Arabic (Bahrain), Amina in `ar-DZ` Arabic (Algeria), Ismael in `ar-DZ` Arabic (Algeria), Rana in `ar-IQ` Arabic (Iraq), Bassel in `ar-IQ` Arabic (Iraq), Sana  in `ar-JO` Arabic (Jordan), Taim  in `ar-JO` Arabic (Jordan), Noura in `ar-KW` Arabic (Kuwait), Fahed in `ar-KW` Arabic (Kuwait), Iman in `ar-LY` Arabic (Libya), Omar in `ar-LY` Arabic (Libya), Mouna in `ar-MA` Arabic (Morocco), Jamal in `ar-MA` Arabic (Morocco), Amal in `ar-QA` Arabic (Qatar), Moaz in `ar-QA` Arabic (Qatar), Amany in `ar-SY` Arabic (Syria), Laith in `ar-SY` Arabic (Syria), Reem in `ar-TN` Arabic (Tunisia), Hedi in `ar-TN` Arabic (Tunisia), Maryam in `ar-YE` Arabic (Yemen), Saleh in `ar-YE` Arabic (Yemen), Nabanita in `bn-BD` Bangla (Bangladesh), Pradeep in `bn-BD` Bangla (Bangladesh), Asilia in `en-KE` English (Kenya), Chilemba in `en-KE` English (Kenya), Ezinne in `en-NG` English (Nigeria), Abeo in `en-NG` English (Nigeria), Imani in `en-TZ` English (Tanzania), Elimu in `en-TZ` English (Tanzania), Sofia in `es-BO` Spanish (Bolivia), Marcelo in `es-BO` Spanish (Bolivia), Catalina in `es-CL` Spanish (Chile), Lorenzo in `es-CL` Spanish (Chile), Maria in `es-CR` Spanish (Costa Rica), Juan in `es-CR` Spanish (Costa Rica), Belkys in `es-CU` Spanish (Cuba), Manuel in `es-CU` Spanish (Cuba), Ramona in `es-DO` Spanish (Dominican Republic), Emilio in `es-DO` Spanish (Dominican Republic), Andrea in `es-EC` Spanish (Ecuador), Luis in `es-EC` Spanish (Ecuador), Teresa in `es-GQ` Spanish (Equatorial Guinea), Javier in `es-GQ` Spanish (Equatorial Guinea), Marta in `es-GT` 
Spanish (Guatemala), Andres in `es-GT` Spanish (Guatemala), Karla in `es-HN` Spanish (Honduras), Carlos in `es-HN` Spanish (Honduras), Yolanda in `es-NI` Spanish (Nicaragua), Federico in `es-NI` Spanish (Nicaragua), Margarita in `es-PA` Spanish (Panama), Roberto in `es-PA` Spanish (Panama), Camila in `es-PE` Spanish (Peru), Alex in `es-PE` Spanish (Peru), Karina  in `es-PR` Spanish (Puerto Rico), Victor  in `es-PR` Spanish (Puerto Rico), Tania in `es-PY` Spanish (Paraguay), Mario in `es-PY` Spanish (Paraguay), Lorena in `es-SV` Spanish (El Salvador), Rodrigo in `es-SV` Spanish (El Salvador), Valentina in `es-UY` Spanish (Uruguay), Mateo in `es-UY` Spanish (Uruguay), Paola in `es-VE` Spanish (Venezuela), Sebastian in `es-VE` Spanish (Venezuela), Dilara in `fa-IR` Persian (Iran), Farid in `fa-IR` Persian (Iran), Blessica in `fil-PH` Filipino (Philippines), Angelo in `fil-PH` Filipino (Philippines), Sabela in `gl-ES` Galician (Spain), Roi in `gl-ES` Galician (Spain), Siti in `jv-ID` Javanese (Indonesia), Dimas in `jv-ID` Javanese (Indonesia), Sreymom in `km-KH` Khmer (Cambodia), Piseth in `km-KH` Khmer (Cambodia), Nilar in `my-MM` Burmese (Myanmar), Thiha in `my-MM` Burmese (Myanmar), Ubax in `so-SO` Somali (Somalia), Muuse in `so-SO` Somali (Somalia), Tuti in `su-ID` Sundanese (Indonesia), Jajang in `su-ID` Sundanese (Indonesia), Rehema in `sw-TZ` Swahili (Tanzania), Daudi in `sw-TZ` Swahili (Tanzania), Saranya in `ta-LK` Tamil (Sri Lanka), Kumar in `ta-LK` Tamil (Sri Lanka), Venba in `ta-SG` Tamil (Singapore), Anbu in `ta-SG` Tamil (Singapore), Gul in `ur-IN` Urdu (India), Salman in `ur-IN` Urdu (India), Madina in `uz-UZ` Uzbek (Uzbekistan), Sardor in `uz-UZ` Uzbek (Uzbekistan), Thando in `zu-ZA` Zulu (South Africa), Themba in `zu-ZA` Zulu (South Africa).-
-## Text-to-Speech 2021-September release
-- **New chatbot voice in `en-US` English (US)**: Sara represents a young female adult who talks more casually and fits best in chatbot scenarios.
-- **New styles added for `ja-JP` Japanese voice Nanami**: Three new styles are now available with Nanami: chat, customer service, and cheerful.
-- **Overall pronunciation improvement**: Ardi in `id-ID`, Premwadee in `th-TH`, Christel in `da-DK`, HoaiMy and NamMinh in `vi-VN`.
-- **Two new voices in `zh-CN` Chinese (Mandarin, China) in preview**: Xiaochen and Xiaoyan, optimized for spontaneous speech and customer service scenarios.
-
-## Text-to-Speech 2021-July release
-
-**Neural TTS updates**
-- Reduced pronunciation errors in Hebrew by 20%.-
-**Speech Studio updates**
-- **Custom Neural Voice**: Updated the training pipeline to UniTTSv3, which improves model quality while reducing training time by 50% for acoustic models.
-- **Audio Content Creation**: Fixed the "Export" performance issue and the bug on custom neural voice selection.
-
-## Speech SDK 1.18.0: 2021-July release
-
-**Note**: Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk).
-
-**Highlights summary**
-- Ubuntu 16.04 reached end of life in April of 2021. In conjunction with Azure DevOps and GitHub, we will drop support for 16.04 in September 2021. Please migrate ubuntu-16.04 workflows to ubuntu-18.04 or newer before then. -
-#### New features
-- **C++**: Simple Language Pattern matching with the Intent Recognizer now makes it easier to [implement simple intent recognition scenarios](./get-started-intent-recognition.md?pivots=programming-language-cpp).
-- **C++/C#/Java**: We added a new API, `GetActivationPhrasesAsync()`, to the `VoiceProfileClient` class for receiving a list of valid activation phrases in the speaker recognition enrollment phase for independent recognition scenarios.
- - **Important**: The Speaker Recognition feature is in Preview. All voice profiles created in Preview will be discontinued 90 days after the Speaker Recognition feature is moved out of Preview into General Availability. At that point the Preview voice profiles will stop functioning.
-- **Python**: Added [support for continuous Language Identification (LID)](./how-to-automatic-language-detection.md?pivots=programming-language-python) on the existing `SpeechRecognizer` and `TranslationRecognizer` objects.
-- **Python**: Added a [new Python object](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.sourcelanguagerecognizer) named `SourceLanguageRecognizer` to do one-time or continuous LID (without recognition or translation).
-- **JavaScript**: `getActivationPhrasesAsync` API added to the `VoiceProfileClient` class for receiving a list of valid activation phrases in the speaker recognition enrollment phase for independent recognition scenarios.
-- **JavaScript**: `VoiceProfileClient`'s `enrollProfileAsync` API is now async awaitable. See [this independent identification code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/node/speaker-recognition/identification/independent-identification.js) for example usage.
-
-#### Improvements
-- **Java**: **AutoCloseable** support added to many Java objects. The try-with-resources model is now supported to release resources. See [this sample that uses try-with-resources](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java#L28). Also see the Oracle Java documentation tutorial on [The try-with-resources Statement](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) to learn about this pattern.
-- **Disk footprint** has been significantly reduced for many platforms and architectures. Examples for the `Microsoft.CognitiveServices.Speech.core` binary: x64 Linux is 475KB smaller (8.0% reduction); ARM64 Windows UWP is 464KB smaller (11.5% reduction); x86 Windows is 343KB smaller (17.5% reduction); and x64 Windows is 451KB smaller (19.4% reduction).
-
-#### Bug fixes
-- **Java**: Fixed a synthesis error when the synthesis text contains surrogate characters. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1118).
-- **JavaScript**: Browser microphone audio processing now uses `AudioWorkletNode` instead of the deprecated `ScriptProcessorNode`. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/391).
-- **JavaScript**: Correctly keep conversations alive during long-running conversation translation scenarios. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/389).
-- **JavaScript**: Fixed an issue with the recognizer reconnecting to a mediastream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/385).
-- **JavaScript**: Fixed an issue with the recognizer reconnecting to a pushStream in continuous recognition. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/pull/399).
-- **JavaScript**: Corrected word-level offset calculation in detailed recognition results. Details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/394).
-
-#### Samples
-- Java quickstart samples updated [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java).
-- JavaScript speaker recognition samples updated to show the new usage of `enrollProfileAsync()`. See samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/node).
-
-## Text-to-Speech 2021-June release
-
-**Speech Studio updates**
-- **Custom Neural Voice**: Custom Neural Voice training extended to support Southeast Asia. New features released to support data upload status checking.
-- **Audio Content Creation**: Released a new feature to support custom lexicons. With this feature, users can easily create their lexicon files and define customized pronunciation for their audio output.
-
-## Text-to-Speech 2021-May release
-
-**New languages and voices added for neural TTS**
-- **Ten new languages introduced** - 20 new voices in 10 new locales are added to the neural TTS language list: Yan in `en-HK` English (Hong Kong), Sam in `en-HK` English (Hong Kong), Molly in `en-NZ` English (New Zealand), Mitchell in `en-NZ` English (New Zealand), Luna in `en-SG` English (Singapore), Wayne in `en-SG` English (Singapore), Leah in `en-ZA` English (South Africa), Luke in `en-ZA` English (South Africa), Dhwani in `gu-IN` Gujarati (India), Niranjan in `gu-IN` Gujarati (India), Aarohi in `mr-IN` Marathi (India), Manohar in `mr-IN` Marathi (India), Elena in `es-AR` Spanish (Argentina), Tomas in `es-AR` Spanish (Argentina), Salome in `es-CO` Spanish (Colombia), Gonzalo in `es-CO` Spanish (Colombia), Paloma in `es-US` Spanish (US), Alonso in `es-US` Spanish (US), Zuri in `sw-KE` Swahili (Kenya), Rafiki in `sw-KE` Swahili (Kenya).
-- **Eleven new en-US voices in preview** - 11 new en-US voices in preview are added to American English: Ashley, Amber, Ana, Brandon, Christopher, Cora, Elizabeth, Eric, Michelle, Monica, and Jacob.
-- **Five `zh-CN` Chinese (Mandarin, Simplified) voices are generally available** - Five Chinese (Mandarin, Simplified) voices changed from preview to generally available: Yunxi, Xiaomo, Xiaoman, Xiaoxuan, and Xiaorui. These voices are now available in all [regions](regions.md#prebuilt-neural-voices). Yunxi is added with a new 'assistant' style, which is suitable for chat bot and voice agent scenarios. Xiaomo's voice styles are refined to be more natural and featured.
-
-## Speech SDK 1.17.0: 2021-May release
-
->[!NOTE]
->Get started with the Speech SDK [here](speech-sdk.md#get-the-speech-sdk).
-
-**Highlights summary**
-- Smaller footprint - we continue to decrease the memory and disk footprint of the Speech SDK and its components.
-- A new stand-alone language identification API allows you to recognize what language is being spoken.
-- Develop speech-enabled mixed reality and gaming applications using Unity on macOS.
-- You can now use Text-to-Speech in addition to speech recognition from the Go programming language.
-- Several bug fixes to address issues you, our valued customers, have flagged on GitHub. Thank you! Keep the feedback coming!
-
-#### New features
-- **C++/C#**: New stand-alone Single-Shot/At-Start and Continuous Language Detection via the `SourceLanguageRecognizer` API. If you only want to detect the language(s) spoken in audio content, this is the API to do that. See details for [C++](/cpp/cognitive-services/speech/sourcelanguagerecognizer) and [C#](/dotnet/api/microsoft.cognitiveservices.speech.sourcelanguagerecognizer).
-- **C++/C#**: Speech Recognition and Translation Recognition now support both at-start and continuous Language Identification, so you can programmatically determine which language(s) are being spoken before they are transcribed or translated. See documentation [here for Speech Recognition](how-to-automatic-language-detection.md) and [here for Speech Translation](get-started-speech-translation.md).
-- **C#**: Added Unity support on macOS (x64). This unlocks speech recognition and speech synthesis use cases in mixed reality and gaming!
-- **Go**: We added support for speech synthesis/Text-to-Speech to the Go programming language to make speech synthesis available in even more use cases. See our [quickstart](get-started-text-to-speech.md?tabs=windowsinstall&pivots=programming-language-go) or our [reference documentation](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go).
-- **C++/C#/Jav).
-- **C++/C#/Java/Python/Objective-C/Go**: We now expose the latency and underrun time in `SpeechSynthesisResult` to help you monitor and diagnose speech synthesis latency issues. See details for [C++](/cpp/cognitive-services/speech/speechsynthesisresult), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisresult), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisresult), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisresult) and [Go](https://pkg.go.dev/github.com/Microsoft/cognitive-services-speech-sdk-go#readme-reference).
-- **C++/C#/Jav#prebuilt-neural-voices) to change the default.
-- **C++/C#/Java/Python/Objective-C/Go**: We added a Gender property to the synthesis voice info to make it easier to select voices based on gender. This addresses [GitHub issue #1055](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1055).
-- **C++, C#, Java, JavaScript**: We now support `retrieveEnrollmentResultAsync`, `getAuthorizationPhrasesAsync` and `getAllProfilesAsync()` in Speaker Recognition to ease user management of all voice profiles for a given account. See documentation for [C++](/cpp/cognitive-services/speech/speaker-voiceprofileclient), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speaker.voiceprofileclient), [Java](/java/api/com.microsoft.cognitiveservices.speech.voiceprofileclient), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/voiceprofileclient). This addresses [GitHub issue #338](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/338).
-- **JavaScript**: We added retry for connection failures that will make your JavaScript-based speech applications more robust.
-
-#### Improvements
-- Linux and Android Speech SDK binaries have been updated to use the latest version of OpenSSL (1.1.1k).
-- Code size improvements:
- - Language Understanding is now split into a separate "lu" library.
- - Windows x64 core binary size decreased by 14.4%.
- - Android ARM64 core binary size decreased by 13.7%.
- - other components also decreased in size.
-
-#### Bug fixes
-- **All**: Fixed [GitHub issue #842](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/842) for ServiceTimeout. You can now transcribe very long audio files using the Speech SDK without the connection to the service terminating with this error. However, we still recommend you use [batch transcription](batch-transcription.md) for long files.
-- **C#**: Fixed [GitHub issue #947](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/947) where no speech input could leave your app in a bad state.
-- **Java**: Fixed [GitHub Issue #997](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/997) where the Java Speech SDK 1.16 crashes when using DialogServiceConnector without a network connection or with an invalid subscription key.
-- Fixed a crash when abruptly stopping speech recognition (for example, using CTRL+C in a console app).
-- **Java**: Added a fix to delete temporary files on Windows when using the Java Speech SDK.
-- **Java**: Fixed [GitHub issue #994](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/994) where calling `DialogServiceConnector.stopListeningAsync` could result in an error.
-- **Java**: Fixed a customer issue in the [virtual assistant quickstart](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/jre/virtual-assistant).
-- **JavaScript**: Fixed [GitHub issue #366](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/366) where `ConversationTranslator` threw an error 'this.cancelSpeech is not a function'.
-- **JavaScript**: Fixed [GitHub issue #298](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/298) where the 'Get result as an in-memory stream' sample played sound out loud.
-- **JavaScript**: Fixed [GitHub issue #350](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/350) where calling `AudioConfig` could result in a 'ReferenceError: MediaStream is not defined'.
-- **JavaScript**: Fixed an UnhandledPromiseRejection warning in Node.js for long-running sessions.
-
-#### Samples
-- Updated Unity samples documentation for macOS [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
-- A React Native sample for the Cognitive Services speech recognition service is now available [here](https://github.com/microsoft/cognitive-services-sdk-react-native-example).
-
-## Speech CLI (also known as SPX): 2021-May release
-
->[!NOTE]
->Get started with the Azure Speech service command line interface (CLI) [here](spx-basics.md). The CLI enables you to use the Azure Speech service without writing any code.
-
-#### New features
-- SPX now supports Profile, Speaker ID, and Speaker verification - try `spx profile` and `spx speaker` from the SPX command line.
-- We also added Dialog support - try `spx dialog` from the SPX command line.
-- SPX help improvements. Please give us feedback about how this works for you by opening a [GitHub issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).
-- We've decreased the size of the SPX .NET tool install.
-
-**COVID-19 abridged testing**:
-
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2021-April release
-
-**Neural TTS is available across 21 regions**
-- **Twelve new regions added** - Neural TTS is now available in these 12 new regions: `Japan East`, `Japan West`, `Korea Central`, `North Central US`, `North Europe`, `South Central US`, `Southeast Asia`, `UK South`, `West Central US`, `West Europe`, `West US`, `West US 2`. Check [here](regions.md#text-to-speech) for the full list of 21 supported regions.
-
-## Text-to-Speech 2021-March release
-
-**New languages and voices added for neural TTS**
-- **Six new languages introduced** - 12 new voices in 6 new locales are added to the neural TTS language list: Nia in `cy-GB` Welsh (United Kingdom), Aled in `cy-GB` Welsh (United Kingdom), Rosa in `en-PH` English (Philippines), James in `en-PH` English (Philippines), Charline in `fr-BE` French (Belgium), Gerard in `fr-BE` French (Belgium), Dena in `nl-BE` Dutch (Belgium), Arnaud in `nl-BE` Dutch (Belgium), Polina in `uk-UA` Ukrainian (Ukraine), Ostap in `uk-UA` Ukrainian (Ukraine), Uzma in `ur-PK` Urdu (Pakistan), Asad in `ur-PK` Urdu (Pakistan).
-- **Five languages from preview to GA** - 10 voices in 5 locales introduced in 2020-November are now GA: Kert in `et-EE` Estonian (Estonia), Colm in `ga-IE` Irish (Ireland), Nils in `lv-LV` Latvian (Latvia), Leonas in `lt-LT` Lithuanian (Lithuania), Joseph in `mt-MT` Maltese (Malta).
-- **New male voice added for French (Canada)** - A new voice, Antoine, is available for `fr-CA` French (Canada).
-- **Quality improvement** - Pronunciation error rate reduction: `hu-HU` Hungarian - 48.17%, `nb-NO` Norwegian - 52.76%, `nl-NL` Dutch (Netherlands) - 22.11%.
-
-With this release, we now support a total of 142 neural voices across 60 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit [Language support](language-support.md#text-to-speech) for the full list.
-
-**Get facial pose events to animate characters**
-
-Neural Text-to-Speech now includes the [viseme event](how-to-speech-synthesis-viseme.md). Viseme events allow users to get a sequence of facial poses along with synthesized speech. Visemes can be used to control the movement of 2D and 3D avatar models, matching mouth movements to synthesized speech. Viseme events are only available for the `en-US-AriaNeural` voice at this time.
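To illustrate how an application might consume viseme events, here is a minimal, SDK-independent sketch in Python. It assumes the events arrive as (audio offset in 100-nanosecond ticks, viseme ID) pairs; the `MOUTH_SHAPES` mapping and the event data are hypothetical placeholders for an avatar's real blend-shape table.

```python
# Illustrative sketch (not the SDK API itself): convert a sequence of
# viseme events -- (audio offset in 100-ns ticks, viseme ID) pairs --
# into (time in ms, mouth shape) animation keyframes for an avatar.

TICKS_PER_MS = 10_000  # 100-ns ticks per millisecond

# Hypothetical viseme-ID -> mouth-shape table; a real app maps each
# service viseme ID to blend shapes for its own avatar model.
MOUTH_SHAPES = {0: "silence", 1: "open", 2: "wide", 21: "closed"}

def to_keyframes(viseme_events):
    """Turn (offset_ticks, viseme_id) pairs into (time_ms, shape) keyframes."""
    return [
        (offset // TICKS_PER_MS, MOUTH_SHAPES.get(viseme_id, "neutral"))
        for offset, viseme_id in viseme_events
    ]

# Made-up example event data
events = [(0, 0), (500_000, 1), (1_200_000, 21)]
print(to_keyframes(events))  # [(0, 'silence'), (50, 'open'), (120, 'closed')]
```

The keyframes can then drive the avatar's mouth animation in sync with audio playback.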
-
-**Add the bookmark element in Speech Synthesis Markup Language (SSML)**
-
-The [bookmark element](speech-synthesis-markup.md#bookmark-element) allows you to insert custom markers in SSML to get the offset of each marker in the audio stream. It can be used to reference a specific location in the text or tag sequence.
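A minimal sketch of building an SSML document that interleaves text with bookmark markers follows; the voice name, mark names, and helper function are illustrative assumptions, not part of the documented API.

```python
# Sketch: build an SSML string with <bookmark/> markers between text parts.
# The synthesizer's bookmark event then reports the audio offset at which
# each named mark is reached. Voice and mark names here are examples only.
def ssml_with_bookmarks(voice, parts):
    """Interleave text parts with <bookmark mark='markN'/> markers."""
    body = "".join(
        f"{text}<bookmark mark='mark{i}'/>" for i, text in enumerate(parts)
    )
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xml:lang='en-US'>"
        f"<voice name='{voice}'>{body}</voice></speak>"
    )

ssml = ssml_with_bookmarks("en-US-JennyNeural", ["We are selling ", "bananas."])
print(ssml)
```

The resulting string can be passed to a speech synthesis call, and the reported offsets used to highlight or tag the corresponding text positions.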
-
-## Speech SDK 1.16.0: 2021-March release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-#### New features
-
-- **C++/C#/Jav).
-- **C++/C#/Java/Objective-C/Python**: Added support for decoding compressed TTS/synthesized audio in the SDK. If you set the output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.propertyid), [Java](/java/api/com.microsoft.cognitiveservices.speech.propertyid), [Objective-C](/objectivec/cognitive-services/speech/spxpropertyid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid).
-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig#fromWavFileInput_File_). This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
-- **C++/C#/Java/Objective-C/Python**: Added the `GetVoicesAsync()` method for TTS to return all available synthesis voices. Details for [C++](/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoiceasync), and [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#methods).
-- **C++/C#/Jav).
-- **C++/C#/Jav#bookmark-element).
-- **Java**: Added support for speaker recognition APIs. Details [here](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with the WebM container for TTS (`Webm16Khz16BitMonoOpus` and `Webm24Khz16BitMonoOpus`). These are better formats for streaming audio with the Opus codec. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat).
-- **C++/C#/Java**: Added support for retrieving the voice profile for the speaker recognition scenario. Details for [C++](/cpp/cognitive-services/speech/speaker-speakerrecognizer), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speaker.speakerrecognizer), and [Java](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
-- **C++/C#/Java/Objective-C/Python**: Added support for a separate shared library for audio microphone and speaker control. This allows using the SDK in environments that don't have the required audio library dependencies.
-- **Objective-C/Swift**: Added support for a module framework with umbrella header. This allows importing the Speech SDK as a module in iOS/Mac Objective-C/Swift apps. This addresses [GitHub issue #452](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/452).
-- **Python**: Added support for [Python 3.9](./quickstarts/setup-platform.md?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
-
-**Known issues**
-
-- **C++/C#/Java**: `DialogServiceConnector` cannot use a `CustomCommandsConfig` to access a Custom Commands application and will instead encounter a connection error. This can be worked around by manually adding your application ID to the request with `config.SetServiceProperty("X-CommandsAppId", "your-application-id", ServicePropertyChannel.UriQueryParameter)`. The expected behavior of `CustomCommandsConfig` will be restored in the next release.
-
-#### Improvements
-
-- As part of our multi-release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
-- Improved the accuracy, readability, and see-also sections of our C# reference documentation [here](/dotnet/api/microsoft.cognitiveservices.speech).
-
-#### Bug fixes
-
-- **JavaScript**: Large WAV file headers are now parsed correctly (increases header slice to 512 bytes). This addresses [GitHub issue #962](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/962).
-- **JavaScript**: Corrected a microphone timing issue when the mic stream ends before stop recognition, addressing an issue with speech recognition not working in Firefox.
-- **JavaScript**: We now correctly handle the initialization promise when the browser forces the mic off before turnOn completes.
-- **JavaScript**: We replaced the URL dependency with url-parse. This addresses [GitHub issue #264](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/264).
-- **Android**: Fixed callbacks not working when `minifyEnabled` is set to true.
-- **C++/C#/Java/Objective-C/Python**: `TCP_NODELAY` is now correctly set on the underlying socket IO for TTS to reduce latency.
-- **C++/C#/Java/Python/Objective-C/Go**: Fixed an occasional crash when the recognizer was destroyed just after starting a recognition.
-- **C++/C#/Java**: Fixed an occasional crash in the destruction of the speaker recognizer.
-
-#### Samples
-
-- **JavaScript**: [Browser samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser) no longer require a separate JavaScript library file download.
-
-## Speech CLI (also known as SPX): 2021-March release
-
-> [!NOTE]
-> Get started with the Azure Speech service command line interface (CLI) [here](spx-basics.md). The CLI enables you to use the Azure Speech service without writing any code.
-
-#### New features
-
-- Added the `spx intent` command for intent recognition, replacing `spx recognize intent`.
-- Recognize and intent can now use Azure Functions to calculate word error rate using `spx recognize --wer url <URL>`.
-- Recognize can now output results as VTT files using `spx recognize --output vtt file <FILENAME>`.
-- Sensitive key info is now obscured in debug/verbose output.
-- Added URL checking and an error message for the content field in batch transcription create.
-
-**COVID-19 abridged testing**:
-
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2021-February release
-
-**Custom Neural Voice GA**
-
-Custom Neural Voice is GA in February in 13 languages: Chinese (Mandarin, Simplified), English (Australia), English (India), English (United Kingdom), English (United States), French (Canada), French (France), German (Germany), Italian (Italy), Japanese (Japan), Korean (Korea), Portuguese (Brazil), Spanish (Mexico), and Spanish (Spain). Learn more about [what is Custom Neural Voice](custom-neural-voice.md) and [how to use it responsibly](concepts-guidelines-responsible-deployment-synthetic.md).
-The Custom Neural Voice feature requires registration, and Microsoft may limit access based on Microsoft's eligibility criteria. Learn more about the [limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context).
-
-## Speech SDK 1.15.0: 2021-January release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**Highlights summary**
-- Smaller memory and disk footprint, making the SDK more efficient.
-- Higher-fidelity output formats available for the Custom Neural Voice private preview.
-- Intent Recognizer can now return more than the top intent, giving you the ability to make a separate assessment about your customer's intent.
-- Your voice assistant or bot is now easier to set up, and you can make it stop listening immediately and exercise greater control over how it responds to errors.
-- Improved on-device performance by making compression optional.
-- Use the Speech SDK on Windows ARM/ARM64.
-- Improved low-level debugging.
-- The pronunciation assessment feature is now more widely available.
-- Several bug fixes to address issues YOU, our valued customers, have flagged on GitHub! THANK YOU! Keep the feedback coming!
-
-**Improvements**
-- The Speech SDK is now more efficient and lightweight. We have started a multi release effort to reduce the Speech SDK's memory usage and disk footprint. As a first step we made significant file size reductions in shared libraries on most platforms. Compared to the 1.14 release:
- - 64-bit UWP-compatible Windows libraries are about 30% smaller.
- - 32-bit Windows libraries are not yet seeing a size improvement.
- - Linux libraries are 20-25% smaller.
- - Android libraries are 3-5% smaller.
-
-**New features**
-- **All**: New 48KHz output formats available for the private preview of custom neural voice through the TTS speech synthesis API: `Audio48Khz192KBitRateMonoMp3`, `audio-48khz-192kbitrate-mono-mp3`, `Audio48Khz96KBitRateMonoMp3`, `audio-48khz-96kbitrate-mono-mp3`, `Raw48Khz16BitMonoPcm`, `raw-48khz-16bit-mono-pcm`, `Riff48Khz16BitMonoPcm`, `riff-48khz-16bit-mono-pcm`.
-- **All**: Custom voice is also easier to use. Added support for setting custom voice via `EndpointId` ([C++](/cpp/cognitive-services/speech/speechconfig#setendpointid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.endpointid#Microsoft_CognitiveServices_Speech_SpeechConfig_EndpointId), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setendpointid#com_microsoft_cognitiveservices_speech_SpeechConfig_setEndpointId_String_), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#endpointId), [Objective-C](/objectivec/cognitive-services/speech/spxspeechconfiguration#endpointid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#endpoint-id)). Before this change, custom voice users needed to set the endpoint URL via the `FromEndpoint` method. Now customers can use the `FromSubscription` method just like prebuilt voices, and then provide the deployment ID by setting `EndpointId`. This simplifies setting up custom voices.
-- **C++/C#/Jav#add-a-languageunderstandingmodel-and-intents).
-- **C++/C#/Java**: Make your voice assistant or bot stop listening immediately. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector)) now has a `StopListeningAsync()` method to accompany `ListenOnceAsync()`. This will immediately stop audio capture and gracefully wait for a result, making it perfect for use with "stop now" button-press scenarios.
-- **C++/C#/Java/JavaScript**: Make your voice assistant or bot react better to underlying system errors. `DialogServiceConnector` ([C++](/cpp/cognitive-services/speech/dialog-dialogserviceconnector), [C#](/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [Java](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector)) now has a new `TurnStatusReceived` event handler. These optional events correspond to every [`ITurnContext`](/dotnet/api/microsoft.bot.builder.iturncontext) resolution on the bot and will report turn execution failures when they happen, for example as a result of an unhandled exception, timeout, or network drop between Direct Line Speech and the bot. `TurnStatusReceived` makes it easier to respond to failure conditions. For example, if a bot takes too long on a backend database query (such as looking up a product), `TurnStatusReceived` allows the client to know to reprompt with "sorry, I didn't quite get that, could you please try again" or something similar.
-- **C++/C#**: Use the Speech SDK on more platforms. The [Speech SDK NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) now supports Windows ARM/ARM64 desktop native binaries (UWP was already supported) to make the Speech SDK more useful on more machine types.
-- **Java**: [`DialogServiceConnector`](/java/api/com.microsoft.cognitiveservices.speech.dialog.dialogserviceconnector) now has a `setSpeechActivityTemplate()` method that was unintentionally excluded from the language previously. This is equivalent to setting the `Conversation_Speech_Activity_Template` property and will request that all future Bot Framework activities originated by the Direct Line Speech service merge the provided content into their JSON payloads.
-- **Java**: Improved low-level debugging. The [`Connection`](/java/api/com.microsoft.cognitiveservices.speech.connection) class now has a `MessageReceived` event, similar to other programming languages (C++, C#). This event provides low-level access to incoming data from the service and can be useful for diagnostics and debugging.
-- **JavaScript**: Easier setup for voice assistants and bots through [`BotFrameworkConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/botframeworkconfig), which now has `fromHost()` and `fromEndpoint()` factory methods that simplify the use of custom service locations versus manually setting properties. We also standardized optional specification of `botId` to use a non-default bot across the configuration factories.
-- **JavaScript**: Improved on-device performance through an added string control property for websocket compression. For performance reasons we disabled websocket compression by default. This can be re-enabled for low-bandwidth scenarios. More details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/propertyid). This addresses [GitHub issue #242](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/242).
-- **JavaScript**: Added support for pronunciation assessment to enable evaluation of speech pronunciation. See the quickstart [here](./how-to-pronunciation-assessment.md?pivots=programming-language-javascript).
-
-**Bug fixes**
-- **All** (except JavaScript): Fixed a regression in version 1.14, in which too much memory was allocated by the recognizer.
-- **C++**: Fixed a garbage collection issue with `DialogServiceConnector`, addressing [GitHub issue #794](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/794).
-- **C#**: Fixed an issue with thread shutdown that caused objects to block for about a second when disposed.
-- **C++/C#/Java**: Fixed an exception preventing an application from setting the speech authorization token or activity template more than once on a `DialogServiceConnector`.
-- **C++/C#/Java**: Fixed a recognizer crash due to a race condition in teardown.
-- **JavaScript**: [`DialogServiceConnector`](/javascript/api/microsoft-cognitiveservices-speech-sdk/dialogserviceconnector) did not previously honor the optional `botId` parameter specified in `BotFrameworkConfig`'s factories. This made it necessary to set the `botId` query string parameter manually to use a non-default bot. The bug has been corrected and `botId` values provided to `BotFrameworkConfig`'s factories will be honored and used, including the new `fromHost()` and `fromEndpoint()` additions. This also applies to the `applicationId` parameter for `CustomCommandsConfig`.
-- **JavaScript**: Fixed [GitHub issue #881](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/881), allowing recognizer object re-usage.
-- **JavaScript**: Fixed an issue where the SDK was sending `speech.config` multiple times in one TTS session, wasting bandwidth.
-- **JavaScript**: Simplified error handling on microphone authorization, allowing a more descriptive message to bubble up when the user has not allowed microphone input in their browser.
-- **JavaScript**: Fixed [GitHub issue #249](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/249), where type errors in `ConversationTranslator` and `ConversationTranscriber` caused a compilation error for TypeScript users.
-- **Objective-C**: Fixed an issue where the GStreamer build failed for iOS on Xcode 11.4, addressing [GitHub issue #911](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/911).
-- **Python**: Fixed [GitHub issue #870](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/870), removing "DeprecationWarning: the imp module is deprecated in favor of importlib".
-
-**Samples**
-- [From-file sample for JavaScript browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/javascript/browser/from-file/index.html) now uses files for speech recognition. This addresses [GitHub issue #884](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/884).
-
-## Speech CLI (also known as SPX): 2021-January release
-
-**New features**
-- Speech CLI is now available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech.CLI/) and can be installed via the .NET CLI as a .NET global tool you can call from the shell/command line.
-- The [Custom Speech DevOps Template repo](https://github.com/Azure-Samples/Speech-Service-DevOps-Template) has been updated to use the Speech CLI for its Custom Speech workflows.
-
-**COVID-19 abridged testing**:
-As the ongoing pandemic continues to require our engineers to work from home, pre-pandemic manual verification scripts have been significantly reduced. We test on fewer devices with fewer configurations, and the likelihood of environment-specific bugs slipping through may be increased. We still rigorously validate with a large set of automation. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2020-December release
-
-**New neural voices in GA and preview**
-
-Released 51 new voices for a total of 129 neural voices across 54 languages/locales:
-
-- **46 new voices in GA locales**: Shakir in `ar-EG` Arabic (Egypt), Hamed in `ar-SA` Arabic (Saudi Arabia), Borislav in `bg-BG` Bulgarian (Bulgaria), Joana in `ca-ES` Catalan (Spain), Antonin in `cs-CZ` Czech (Czech Republic), Jeppe in `da-DK` Danish (Denmark), Jonas in `de-AT` German (Austria), Jan in `de-CH` German (Switzerland), Nestoras in `el-GR` Greek (Greece), Liam in `en-CA` English (Canada), Connor in `en-IE` English (Ireland), Madhur in `en-IN` Hindi (India), Mohan in `en-IN` Telugu (India), Prabhat in `en-IN` English (India), Valluvar in `en-IN` Tamil (India), Enric in `es-ES` Catalan (Spain), Kert in `et-EE` Estonian (Estonia), Harri in `fi-FI` Finnish (Finland), Selma in `fi-FI` Finnish (Finland), Fabrice in `fr-CH` French (Switzerland), Colm in `ga-IE` Irish (Ireland), Avri in `he-IL` Hebrew (Israel), Srecko in `hr-HR` Croatian (Croatia), Tamas in `hu-HU` Hungarian (Hungary), Gadis in `id-ID` Indonesian (Indonesia), Leonas in `lt-LT` Lithuanian (Lithuania), Nils in `lv-LV` Latvian (Latvia), Osman in `ms-MY` Malay (Malaysia), Joseph in `mt-MT` Maltese (Malta), Finn in `nb-NO` Norwegian, Bokmål (Norway), Pernille in `nb-NO` Norwegian, Bokmål (Norway), Fenna in `nl-NL` Dutch (Netherlands), Maarten in `nl-NL` Dutch (Netherlands), Agnieszka in `pl-PL` Polish (Poland), Marek in `pl-PL` Polish (Poland), Duarte in `pt-BR` Portuguese (Brazil), Raquel in `pt-PT` Portuguese (Portugal), Emil in `ro-RO` Romanian (Romania), Dmitry in `ru-RU` Russian (Russia), Svetlana in `ru-RU` Russian (Russia), Lukas in `sk-SK` Slovak (Slovakia), Rok in `sl-SI` Slovenian (Slovenia), Mattias in `sv-SE` Swedish (Sweden), Sofie in `sv-SE` Swedish (Sweden), Niwat in `th-TH` Thai (Thailand), Ahmet in `tr-TR` Turkish (Turkey), NamMinh in `vi-VN` Vietnamese (Vietnam), HsiaoChen in `zh-TW` Taiwanese Mandarin (Taiwan), YunJhe in `zh-TW` Taiwanese Mandarin (Taiwan), HiuMaan in `zh-HK` Chinese Cantonese (Hong Kong), WanLung in `zh-HK` Chinese Cantonese (Hong Kong).
-
-- **5 new voices in preview locales**: Kert in `et-EE` Estonian (Estonia), Colm in `ga-IE` Irish (Ireland), Nils in `lv-LV` Latvian (Latvia), Leonas in `lt-LT` Lithuanian (Lithuania), Joseph in `mt-MT` Maltese (Malta).
-
-With this release, we now support a total of 129 neural voices across 54 languages/locales. In addition, over 70 standard voices are available in 49 languages/locales. Visit [Language support](language-support.md#text-to-speech) for the full list.
-
-**Updates for Audio Content Creation**
-- Improved the voice selection UI with voice categories and detailed voice descriptions.
-- Enabled intonation tuning for all neural voices across different languages.
-- Automated the UI localization based on the language of the browser.
-- Enabled `StyleDegree` controls for all `zh-CN` neural voices.
-Visit the [Audio Content Creation tool](https://speech.microsoft.com/audiocontentcreation) to check out the new features.
-
-**Updates for zh-CN voices**
-- Updated all `zh-CN` neural voices to support English speaking.
-- Enabled all `zh-CN` neural voices to support intonation adjustment. SSML or the Audio Content Creation tool can be used to adjust for the best intonation.
-- Updated all `zh-CN` multi-style neural voices to support `StyleDegree` control. Emotion intensity (soft or strong) is adjustable.
-- Updated `zh-CN-YunyeNeural` to support multiple styles, which can perform different emotions.
-
-## Text-to-Speech 2020-November release
-
-**New locales and voices in preview**
-- **Five new voices and languages** are introduced to the neural TTS portfolio. They are: Grace in Maltese (Malta), Ona in Lithuanian (Lithuania), Anu in Estonian (Estonia), Orla in Irish (Ireland), and Everita in Latvian (Latvia).
-- **Five new `zh-CN` voices with multiple styles and roles support**: Xiaohan, Xiaomo, Xiaorui, Xiaoxuan, and Yunxi.
-
-> These voices are available in public preview in three Azure regions: EastUS, SouthEastAsia and WestEurope.
-
-**Neural TTS Container GA**
-- With the Neural TTS Container, developers can run speech synthesis with the most natural digital voices in their own environment for specific security and data governance requirements. Check [how to install Speech Containers](speech-container-howto.md).
-
-**New features**
-- **Custom Voice**: Enabled users to copy a voice model from one region to another, and added support for endpoint suspension and resuming. Go to the [portal](https://speech.microsoft.com/customvoice).
-- [SSML silence tag](speech-synthesis-markup.md#add-silence) support.
-- General TTS voice quality improvements: Improved word-level pronunciation accuracy in `nb-NO`, reducing pronunciation errors by 53%.
-
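As a rough illustration of the silence tag, the fragment below builds an SSML string containing an `mstts:silence` element; the attribute values, voice name, and namespace usage are assumptions drawn from the linked SSML reference, not guaranteed verbatim.

```python
# Sketch (assumed element/attribute names): an SSML document that inserts a
# fixed 200 ms pause at sentence boundaries via the mstts:silence element.
silence_ssml = (
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
    "xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>"
    "<voice name='en-US-JennyNeural'>"
    "<mstts:silence type='Sentenceboundary' value='200ms'/>"
    "Sentence one. Sentence two."
    "</voice></speak>"
)
print(silence_ssml)
```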
-> Read more at [this tech blog](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604).
-
-## Text-to-Speech 2020-October release
-
-**New features**
-- Jenny supports a new `newscast` style. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
-- **Neural voices upgraded to the HiFiNet vocoder, with higher audio fidelity and faster synthesis speed.** This benefits customers whose scenario relies on hi-fi audio or long interactions, including video dubbing, audio books, or online education materials. [Read more about the story and hear the voice samples on our tech community blog](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-- **[Custom Voice](https://speech.microsoft.com/customvoice) & [Audio Content Creation Studio](https://speech.microsoft.com/audiocontentcreation) localized to 17 locales.** Users can easily switch the UI to a local language for a friendlier experience.
-- **Audio Content Creation**: Added style degree control for XiaoxiaoNeural; refined the customized break feature to include incremental breaks of 50 ms.
-
-**General TTS voice quality improvements**
-- Improved word-level pronunciation accuracy in `pl-PL` (error rate reduction: 51%) and `fi-FI` (error rate reduction: 58%).
-- Improved `ja-JP` single-word reading for the dictionary scenario. Reduced pronunciation errors by 80%.
-- `zh-CN-XiaoxiaoNeural`: Improved sentiment/CustomerService/Newscast/Cheerful/Angry style voice quality.
-- `zh-CN`: Improved Erhua pronunciation and light tone, and refined space prosody, which greatly improves intelligibility.
-
-## Speech SDK 1.14.0: 2020-October release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download it [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**New features**
-- **Linux**: Added support for Debian 10 and Ubuntu 20.04 LTS.
-- **Python/Objective-C**: Added support for the `KeywordRecognizer` API. Documentation will be [here](./custom-keyword-basics.md).
-- **C++/Java/C#**: Added support to set any `HttpHeader` key/value via `ServicePropertyChannel::HttpHeader`.
-- **JavaScript**: Added support for the `ConversationTranscriber` API. Read documentation [here](./how-to-use-conversation-transcription.md?pivots=programming-language-javascript).
-- **C++/C#**: Added a new `AudioDataStream FromWavFileInput` method (to read .WAV files) [here (C++)](/cpp/cognitive-services/speech/audiodatastream) and [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream).
-- **C++/C#/Java/Python/Objective-C/Swift**: Added a `stopSpeakingAsync()` method to stop Text-to-Speech synthesis. Read the Reference documentation [here (C++)](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace), [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech), [here (Java)](/java/api/com.microsoft.cognitiveservices.speech), [here (Python)](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech), and [here (Objective-C/Swift)](/objectivec/cognitive-services/speech/).
-- **C#, C++, Java**: Added a `FromDialogServiceConnector()` function to the `Connection` class that can be used to monitor connection and disconnection events for `DialogServiceConnector`. Read the Reference documentation [here (C#)](/dotnet/api/microsoft.cognitiveservices.speech.connection), [here (C++)](/cpp/cognitive-services/speech/connection), and [here (Java)](/java/api/com.microsoft.cognitiveservices.speech.connection).
-- **C++/C#/Jav).
-
-**Breaking change**
-- **JavaScript**: `PullAudioOutputStream.read()` has a return type change from an internal Promise to a native JavaScript Promise.
-
-**Bug fixes**
-- **All**: Fixed a 1.13 regression in `SetServiceProperty` where values with certain special characters were ignored.
-- **C#**: Fixed Windows console samples on Visual Studio 2019 failing to find native DLLs.
-- **C#**: Fixed a crash with memory management if a stream is used as `KeywordRecognizer` input.
-- **ObjectiveC/Swift**: Fixed a crash with memory management if a stream is used as recognizer input.
-- **Windows**: Fixed a co-existence issue with BT HFP/A2DP on UWP.
-- **JavaScript**: Fixed mapping of session IDs to improve logging and aid in internal debug/service correlations.
-- **JavaScript**: Added a fix for `DialogServiceConnector` disabling `ListenOnce` calls after the first call is made.
-- **JavaScript**: Fixed an issue where result output would only ever be "simple".
-- **JavaScript**: Fixed a continuous recognition issue in Safari on macOS.
-- **JavaScript**: CPU load mitigation for high request throughput scenarios.
-- **JavaScript**: Allow access to details of the Voice Profile Enrollment result.
-- **JavaScript**: Added a fix for continuous recognition in `IntentRecognizer`.
-- **C++/C#/Java/Python/Swift/ObjectiveC**: Fixed an incorrect URL for australiaeast and brazilsouth in `IntentRecognizer`.
-- **C++/C#**: Added `VoiceProfileType` as an argument when creating a `VoiceProfile` object.
-- **C++/C#/Java/Python/Swift/ObjectiveC**: Fixed a potential `SPX_INVALID_ARG` when trying to read `AudioDataStream` from a given position.
-- **iOS**: Fixed a crash with speech recognition on Unity.
-
-**Samples**
-- **ObjectiveC**: Added a sample for keyword recognition [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/objective-c/ios/speech-samples).
-- **C#/JavaScript**: Added a quickstart for conversation transcription [here (C#)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/dotnet/conversation-transcription) and [here (JavaScript)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/conversation-transcription).
-- **C++/C#/Java/Python/Swift/ObjectiveC**: Added a sample for pronunciation assessment [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples).
-- **Xamarin**: Updated the quickstart to the latest Visual Studio template [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/xamarin).
-
-**Known Issue**
-- The DigiCert Global Root G2 certificate is not supported by default in HoloLens 2 and Android 4.4 (KitKat) and needs to be added to the system to make the Speech SDK functional. The certificate will be added to HoloLens 2 OS images in the near future. Android 4.4 customers need to add the updated certificate to the system.
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Speech CLI (also known as SPX): 2020-October release
-SPX is the command line interface to use the Azure Speech service without writing code.
-Download the latest version [here](./spx-basics.md). <br>
-
-**New features**
-- `spx csr dataset upload --kind audio|language|acoustic` – create datasets from local data, not just from URLs.
-- `spx csr evaluation create|status|list|update|delete` – compare new models against baseline truth/other models.
-- `spx * list` – supports a non-paged experience (doesn't require `--top X --skip X`).
-- `spx * --http header A=B` – support for custom headers (added for Office for custom authentication).
-- `spx help` – improved text, with back-tick text color-coded (blue).
-
-## Text-to-Speech 2020-September release
-
-### New features
-
-* **Neural TTS**
- * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
- * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#prebuilt-neural-voices).
- * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
-
-* **Containers: Neural TTS Container released in public preview with 16 voices available in 14 languages.** Learn more on [how to deploy Speech Containers for Neural TTS](speech-container-howto.md)
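The speaking styles and `StyleDegree` tuning described above are applied through SSML's `mstts:express-as` element. As a rough sketch (the helper function and the chosen voice, style, and text values are illustrative, not part of this release), a Python snippet might assemble the markup like this:

```python
def build_ssml(voice: str, style: str, style_degree: float, text: str) -> str:
    """Assemble an SSML document that applies a speaking style to a neural voice.

    Wraps the text in an mstts:express-as element; styledegree scales the
    intensity of the style (supported for zh-CN voices such as XiaoXiao).
    """
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}" styledegree="{style_degree}">'
        f'{text}'
        '</mstts:express-as>'
        '</voice>'
        '</speak>'
    )

# Example: XiaoXiao with a style applied at 1.5x intensity.
ssml = build_ssml("zh-CN-XiaoxiaoNeural", "cheerful", 1.5, "你好")
```

The resulting string can then be passed to a speech synthesizer's SSML-based speak call.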
-
-Read the [full announcement of the TTS updates for Ignite 2020](https://techcommunity.microsoft.com/t5/azure-ai/ignite-2020-neural-tts-updates-new-language-support-more-voices/ba-p/1698544)
-
-## Text-to-Speech 2020-August release
-
-### New features
-
-* **Neural TTS: new speaking style for `en-US` Aria voice**. AriaNeural can sound like a news caster when reading news. The 'newscast-formal' style sounds more serious, while the 'newscast-casual' style is more relaxed and informal. See [how to use the speaking styles in SSML](speech-synthesis-markup.md).
-
-* **Custom Voice: a new feature is released to automatically check training data quality**. When you upload your data, the system examines various aspects of your audio and transcript data, and automatically fixes or filters issues to improve the quality of the voice model. This covers the volume of your audio, the noise level, the pronunciation accuracy of speech, the alignment of speech with the normalized text, and silence in the audio, as well as the audio and script format.
-
-* **Audio Content Creation: a set of new features to enable more powerful voice tuning and audio management capabilities**.
-
- * Pronunciation: the pronunciation tuning feature is updated to the latest phoneme set. You can pick the right phoneme element from the library and refine the pronunciation of the words you have selected.
-
- * Download: The audio "Download"/"Export" feature is enhanced to support generating audio by paragraph. You can edit content in the same file/SSML, while generating multiple audio outputs. The file structure of "Download" is refined as well. Now, you can easily get all audio files in one folder.
-
- * Task status: The multi-file export experience is improved. Previously, if one file in a multi-file export failed, the entire task failed; now, all other files are still exported successfully. The task report is enriched with more detailed and structured information, and you can check the logs for all failed files and sentences in the report.
-
- * SSML documentation: linked to the SSML documentation to help you check the rules for using all the tuning features.
-
-* **The Voice List API is updated to include a user-friendly display name and the speaking styles supported for neural voices**.
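As a sketch of how a client might consume that update, the snippet below extracts display names and speaking styles from an illustrative voices-list payload. The field names (`DisplayName`, `ShortName`, `StyleList`) and the sample entries are assumptions for illustration, not the full response schema:

```python
import json

# Illustrative excerpt of a voices-list response. The field names here
# are assumptions based on the update described above, not an exhaustive schema.
sample_response = json.dumps([
    {"DisplayName": "Jenny", "ShortName": "en-US-JennyNeural",
     "StyleList": ["assistant", "chat", "customerservice"]},
    {"DisplayName": "Guy", "ShortName": "en-US-GuyNeural"},
])

def voices_with_styles(payload: str) -> dict:
    """Map each voice's user-friendly display name to its supported styles.

    Voices without a StyleList entry map to an empty list.
    """
    return {v["DisplayName"]: v.get("StyleList", []) for v in json.loads(payload)}
```

A caller could use this to present only the voices that support a given style.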
-
-### General TTS voice quality improvements
-
-* Reduced word-level pronunciation error % for `ru-RU` (errors reduced by 56%) and `sv-SE` (errors reduced by 49%)
-
-* Improved polyphony word reading on `en-US` neural voices by 40%. Examples of polyphony words include "read", "live", "content", "record", "object", etc.
-
-* Improved the naturalness of the question tone in `fr-FR`. MOS (Mean Opinion Score) gain: +0.28
-
-* Updated the vocoders for the following voices, with fidelity improvements and overall performance speed-up by 40%.
-
- | Locale | Voice |
- |||
- | `en-GB` | Mia |
- | `es-MX` | Dalia |
- | `fr-CA` | Sylvie |
- | `fr-FR` | Denise |
- | `ja-JP` | Nanami |
- | `ko-KR` | Sun-Hi |
-
-### Bug fixes
-
-* Fixed a number of bugs with the Audio Content Creation tool
- * Fixed issue with auto refreshing.
- * Fixed issues with voice styles in zh-CN in the South East Asia region.
- * Fixed stability issue, including an export error with the 'break' tag, and errors in punctuation.
-
-## New speech-to-text locales: 2020-August Release
-Speech-to-text released 26 new locales in August: 2 European languages `cs-CZ` and `hu-HU`, 5 English locales and 19 Spanish locales that cover most South American countries. Below is a list of the new locales. See the complete language list [here](./language-support.md).
-
-| Locale | Language |
-||--|
-| `cs-CZ` | Czech (Czech Republic) |
-| `en-HK` | English (Hong Kong) |
-| `en-IE` | English (Ireland) |
-| `en-PH` | English (Philippines) |
-| `en-SG` | English (Singapore) |
-| `en-ZA` | English (South Africa) |
-| `es-AR` | Spanish (Argentina) |
-| `es-BO` | Spanish (Bolivia) |
-| `es-CL` | Spanish (Chile) |
-| `es-CO` | Spanish (Colombia) |
-| `es-CR` | Spanish (Costa Rica) |
-| `es-CU` | Spanish (Cuba) |
-| `es-DO` | Spanish (Dominican Republic) |
-| `es-EC` | Spanish (Ecuador) |
-| `es-GT` | Spanish (Guatemala) |
-| `es-HN` | Spanish (Honduras) |
-| `es-NI` | Spanish (Nicaragua) |
-| `es-PA` | Spanish (Panama) |
-| `es-PE` | Spanish (Peru) |
-| `es-PR` | Spanish (Puerto Rico) |
-| `es-PY` | Spanish (Paraguay) |
-| `es-SV` | Spanish (El Salvador) |
-| `es-US` | Spanish (USA) |
-| `es-UY` | Spanish (Uruguay) |
-| `es-VE` | Spanish (Venezuela) |
-| `hu-HU` | Hungarian (Hungary) |
--
-## Speech SDK 1.13.0: 2020-July release
-
-> [!NOTE]
-> The Speech SDK on Windows depends on the shared Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019. Download and install it from [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
-
-**New features**
-- **C#**: Added support for asynchronous conversation transcription. See documentation [here](./how-to-async-conversation-transcription.md).
-- **JavaScript**: Added Speaker Recognition support for both [browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/browser/speaker-recognition) and [node.js](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/speaker-recognition).
-- **JavaScript**: Added support for language identification/language ID. See documentation [here](./how-to-automatic-language-detection.md?pivots=programming-language-javascript).
-- **Objective-C**: Added support for [multi-device conversation](./multi-device-conversation.md) and [conversation transcription](./conversation-transcription.md).
-- **Python**: Added compressed audio support for Python on Windows and Linux. See documentation [here](./how-to-use-codec-compressed-audio-input-streams.md).
-
-**Bug fixes**
-- **All**: Fixed an issue that caused the `KeywordRecognizer` to not move the streams forward after a recognition.
-- **All**: Fixed an issue that caused the stream obtained from a `KeywordRecognitionResult` to not contain the keyword.
-- **All**: Fixed an issue where `SendMessageAsync` didn't actually send the message over the wire after users finished waiting for it.
-- **All**: Fixed a crash in Speaker Recognition APIs when users called the `VoiceProfileClient::SpeakerRecEnrollProfileAsync` method multiple times without waiting for the calls to finish.
-- **All**: Fixed enabling file logging in the `VoiceProfileClient` and `SpeakerRecognizer` classes.
-- **JavaScript**: Fixed an [issue](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/74) with throttling when the browser is minimized.
-- **JavaScript**: Fixed an [issue](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/78) with a memory leak on streams.
-- **JavaScript**: Added caching for OCSP responses from NodeJS.
-- **Java**: Fixed an issue that was causing BigInteger fields to always return 0.
-- **iOS**: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/702) with publishing Speech SDK-based apps in the iOS App Store.
-
-**Samples**
-- **C++**: Added sample code for Speaker Recognition [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/windows/console/samples/speaker_recognition_samples.cpp).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Text-to-Speech 2020-July release
-
-### New features
-
-* **Neural TTS, 15 new neural voices**: The new voices added to the Neural TTS portfolio are Salma in `ar-EG` Arabic (Egypt), Zariyah in `ar-SA` Arabic (Saudi Arabia), Alba in `ca-ES` Catalan (Spain), Christel in `da-DK` Danish (Denmark), Neerja in `en-IN` English (India), Noora in `fi-FI` Finnish (Finland), Swara in `hi-IN` Hindi (India), Colette in `nl-NL` Dutch (Netherlands), Zofia in `pl-PL` Polish (Poland), Fernanda in `pt-PT` Portuguese (Portugal), Dariya in `ru-RU` Russian (Russia), Hillevi in `sv-SE` Swedish (Sweden), Achara in `th-TH` Thai (Thailand), HiuGaai in `zh-HK` Chinese (Cantonese, Traditional) and HsiaoYu in `zh-TW` Chinese (Taiwanese Mandarin). Check all [supported languages](./language-support.md#prebuilt-neural-voices).
-
-* **Custom Voice, streamlined voice testing with the training flow to simplify user experience**: With the new testing feature, each voice will be automatically tested with a predefined test set optimized for each language to cover general and voice assistant scenarios. These test sets are carefully selected and tested to include typical use cases and phonemes in the language. In addition, users can still choose to upload their own test scripts when training a model.
-
-* **Audio Content Creation: a set of new features is released to enable more powerful voice tuning and audio management capabilities**
-
- * `Pitch`, `rate`, and `volume` are enhanced to support tuning with a predefined value, like slow, medium and fast. It's now straightforward for users to pick a 'constant' value for their audio editing.
-
- ![Audio tuning](media/release-notes/audio-tuning.png)
-
- * Users can now review the `Audio history` for their work file. With this feature, users can easily track all the generated audio related to a working file. They can check the history version and compare the quality while tuning at the same time.
-
- ![Audio history](media/release-notes/audio-history.png)
-
- * The `Clear` feature is now more flexible. Users can clear a specific tuning parameter while keeping other parameters available for the selected content.
-
- * A tutorial video was added on the [landing page](https://speech.microsoft.com/audiocontentcreation) to help users quickly get started with TTS voice tuning and audio management.
-
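The predefined 'constant' values described above map onto the attributes of SSML's `<prosody>` element. A minimal sketch, assuming the standard SSML predefined rate values (`x-slow` through `x-fast`); the helper itself is illustrative, not part of the tool:

```python
def with_prosody(text: str, rate: str = "medium", pitch: str = "medium",
                 volume: str = "medium") -> str:
    """Wrap text in an SSML prosody element using predefined ('constant')
    values such as slow/medium/fast, rather than numeric offsets."""
    allowed_rates = {"x-slow", "slow", "medium", "fast", "x-fast", "default"}
    if rate not in allowed_rates:
        raise ValueError(f"rate must be one of {sorted(allowed_rates)}")
    return f'<prosody rate="{rate}" pitch="{pitch}" volume="{volume}">{text}</prosody>'
```

The fragment produced here would sit inside a `<voice>` element of a full SSML document.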
-### General TTS voice quality improvements
-
-* Improved TTS vocoders for higher fidelity and lower latency.
-
- * Updated Elsa in `it-IT` to a new vocoder, which achieved a +0.464 CMOS (Comparative Mean Opinion Score) gain in voice quality, 40% faster synthesis, and a 30% reduction in first-byte latency.
- * Updated Xiaoxiao in `zh-CN` to the new vocoder, with a +0.148 CMOS gain for the general domain, +0.348 for the newscast style, and +0.195 for the lyrical style.
-
-* Updated `de-DE` and `ja-JP` voice models to make the TTS output more natural.
-
- * Updated Katja in `de-DE` with the latest prosody modeling method; the MOS (Mean Opinion Score) gain is +0.13.
- * Updated Nanami in `ja-JP` with a new pitch accent prosody model; the MOS (Mean Opinion Score) gain is +0.19.
-
-* Improved word-level pronunciation accuracy in five languages.
-
- | Language | Pronunciation error reduction |
- |||
- | `en-GB` | 51% |
- | `ko-KR` | 17% |
- | `pt-BR` | 39% |
- | `pt-PT` | 77% |
- | `id-ID` | 46% |
-
-### Bug fixes
-
-* Currency reading
- * Fixed the issue with currency reading for `es-ES` and `es-MX`
-
- | Language | Input | Readout after improvement |
- ||||
- | `es-MX` | $1.58 | un peso cincuenta y ocho centavos |
- | `es-ES` | $1.58 | un dólar cincuenta y ocho centavos |
-
- * Support for negative currency (like "-325 €") in the following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
-
-* Improved address reading in `pt-PT`.
-* Fixed Natasha (`en-AU`) and Libby (`en-GB`) pronunciation issues with the words "for" and "four".
-* Fixed bugs on Audio Content Creation tool
- * Fixed the additional and unexpected pause after the second paragraph.
- * Added the 'No break' feature back after a regression bug.
- * Fixed the random refresh issue in Speech Studio.
-
-### Samples/SDK
-
-* JavaScript: Fixed a playback issue in Firefox, and in Safari on macOS and iOS.
-
-## Speech SDK 1.12.1: 2020-June release
-**Speech CLI (also known as SPX)**
-- Added in-CLI help search features:
- - `spx help find --text TEXT`
- - `spx help find --topic NAME`
-- Updated to work with newly deployed v3.0 Batch and Custom Speech APIs:
- - `spx help batch examples`
- - `spx help csr examples`
-
-**New features**
-- **C\#, C++**: Speaker Recognition preview. This feature enables speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). Start with the [overview](./speaker-recognition-overview.md), read the [Speaker Recognition basics article](./get-started-speaker-recognition.md), or see the [API reference docs](/rest/api/speakerrecognition/).
-
-**Bug fixes**
-- **C\#, C++**: Fixed an issue where microphone recording was not working in 1.12 in speaker recognition.
-- **JavaScript**: Fixes for Text-To-Speech in Firefox, and Safari on macOS and iOS.
-- Fixed a Windows application verifier access violation crash in conversation transcription when using an eight-channel stream.
-- Fixed a Windows application verifier access violation crash in multi-device conversation translation.
-
-**Samples**
-- **C#**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/dotnet/speaker-recognition) for speaker recognition.
-- **C++**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp/windows/speaker-recognition) for speaker recognition.
-- **Java**: [Code sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/intent-recognition) for intent recognition on Android.
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
--
-## Speech SDK 1.12.0: 2020-May release
-**Speech CLI (also known as SPX)**
-- **SPX** is a new command line tool that allows you to perform recognition, synthesis, translation, batch transcription, and custom speech management from the command line. Use it to test the Speech Service, or to script the Speech Service tasks you need to perform. Download the tool and read the documentation [here](./spx-overview.md).
-
-**New features**
-- **Go**: New Go language support for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Set up your dev environment [here](./quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below.
-- **JavaScript**: Added browser support for Text-To-Speech. See documentation [here](./get-started-text-to-speech.md?pivots=programming-language-JavaScript).
-- **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on Windows, Android, Linux and iOS platforms. Read the documentation [here](./keyword-recognition-overview.md). For sample code, see the Samples section below.
-- **Java**: Added multi-device conversation with translation support. See the reference doc [here](/java/api/com.microsoft.cognitiveservices.speech.transcription).
-
-**Improvements & Optimizations**
-- **JavaScript**: Optimized the browser microphone implementation, improving speech recognition accuracy.
-- **Java**: Refactored bindings using a direct JNI implementation without SWIG. This change reduces the bindings size by 10x for all Java packages used for Windows, Android, Linux and Mac, and eases further development of the Speech SDK Java implementation.
-- **Linux**: Updated the support [documentation](./speech-sdk.md?tabs=linux) with the latest RHEL 7-specific notes.
-- Improved connection logic to attempt connecting multiple times when service and network errors occur.
-- Updated the [portal.azure.com](https://portal.azure.com) Speech quickstart page to help developers take the next step in the Azure Speech journey.
-
-**Bug fixes**
-- **C#, Java**: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/587) with loading SDK libraries on Linux ARM (both 32-bit and 64-bit).
-- **C#**: Fixed explicit disposal of native handles for `TranslationRecognizer`, `IntentRecognizer` and `Connection` objects.
-- **C#**: Fixed audio input lifetime management for the `ConversationTranscriber` object.
-- Fixed an issue where the `IntentRecognizer` result reason was not set properly when recognizing intents from simple phrases.
-- Fixed an issue where the `SpeechRecognitionEventArgs` result offset was not set correctly.
-- Fixed a race condition where the SDK was trying to send a network message before opening the websocket connection. This was reproducible for `TranslationRecognizer` while adding participants.
-- Fixed memory leaks in the keyword recognizer engine.
-
-**Samples**
-- **Go**: Added quickstarts for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Find sample code [here](https://github.com/microsoft/cognitive-services-speech-sdk-go/tree/master/samples).
-- **JavaScript**: Added quickstarts for [Text-to-Speech](./get-started-text-to-speech.md?pivots=programming-language-javascript), [Translation](./get-started-speech-translation.md?pivots=programming-language-csharp&tabs=script), and [Intent Recognition](./get-started-intent-recognition.md?pivots=programming-language-javascript).
-- Keyword recognition samples for [C\#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer) and [Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer) (Android).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual verification testing as we normally do. We haven't made any changes we think could have broken anything, and our automated tests all passed. If we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Stay healthy!
-
-## Speech SDK 1.11.0: 2020-March release
-**New features**
-- Linux: Added support for Red Hat Enterprise Linux (RHEL)/CentOS 7 x64 with [instructions](./how-to-configure-rhel-centos-7.md) on how to configure the system for the Speech SDK.
-- Linux: Added support for .NET Core C# on Linux ARM32 and ARM64. Read more [here](./speech-sdk.md?tabs=linux).
-- C#, C++: Added `UtteranceId` in `ConversationTranscriptionResult`, a consistent ID across all the intermediate and final speech recognition results. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.transcription.conversationtranscriptionresult), [C++](/cpp/cognitive-services/speech/transcription-conversationtranscriptionresult).
-- Python: Added support for `Language ID`. See speech_sample.py in the [GitHub repo](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/python/console).
-- Windows: Added compressed audio input format support on the Windows platform for all win32 console applications. Details [here](./how-to-use-codec-compressed-audio-input-streams.md).
-- JavaScript: Support for speech synthesis (Text-to-Speech) in NodeJS. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/text-to-speech).
-- JavaScript: Added new APIs to enable inspection of all sent and received messages. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript).
-
-**Bug fixes**
-- C#, C++: Fixed an issue so `SendMessageAsync` now sends binary messages as the binary type. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.sendmessageasync#Microsoft_CognitiveServices_Speech_Connection_SendMessageAsync_System_String_System_Byte___System_UInt32_), [C++](/cpp/cognitive-services/speech/connection).
-- C#, C++: Fixed an issue where using the `Connection MessageReceived` event could cause a crash if the `Recognizer` is disposed before the `Connection` object. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.messagereceived), [C++](/cpp/cognitive-services/speech/connection#messagereceived).
-- Android: Audio buffer size from the microphone decreased from 800ms to 100ms to improve latency.
-- Android: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/563) with the x86 Android emulator in Android Studio.
-- JavaScript: Added support for regions in China with the `fromSubscription` API. Details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#fromsubscription-string--string-).
-- JavaScript: Added more error information for connection failures from NodeJS.
-
-**Samples**
-- Unity: Fixed the public intent recognition sample, where the LUIS JSON import was failing. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/369).
-- Python: Added a sample for `Language ID`. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
-
-**COVID-19 abridged testing:**
-Due to working remotely over the last few weeks, we couldn't do as much manual device verification testing as we normally do. For example, we couldn't test microphone input and speaker output on Linux, iOS, and macOS. We haven't made any changes we think could have broken anything on these platforms, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br>
-Thank you for your continued support. As always, please post questions or feedback on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen) or [Stack Overflow](https://stackoverflow.microsoft.com/questions/tagged/731).<br>
-Stay healthy!
-
-## Speech SDK 1.10.0: 2020-February release
-
-**New features**
-
- > [!NOTE]
- > Customers must configure OpenSSL according to [these instructions](./how-to-configure-openssl-linux.md).
-
-**Bug fixes**
--
-**Samples**
--
-**Other changes**
--
-## Speech SDK 1.9.0: 2020-January release
-
-**New Features**
-
-- Multi-device conversation: connect multiple devices to the same speech or text-based conversation, and optionally translate messages sent between them. Learn more in [this article](multi-device-conversation.md).
-- Keyword recognition support added for the Android .aar package, and added support for x86 and x64 flavors.
-- Objective-C: `SendMessage` and `SetMessageProperty` methods added to the `Connection` object. See documentation [here](/objectivec/cognitive-services/speech/spxconnection).
-- The TTS C++ API now supports `std::wstring` as synthesis text input, removing the need to convert a wstring to string before passing it to the SDK. See details [here](/cpp/cognitive-services/speech/speechsynthesizer#speaktextasync).
-- C#: [Language ID](./how-to-automatic-language-detection.md?pivots=programming-language-csharp) and [source language config](./how-to-specify-source-language.md?pivots=programming-language-csharp) are now available.
-- JavaScript: Added a feature to the `Connection` object to pass through custom messages from the Speech Service via the `receivedServiceMessage` callback.
-- JavaScript: Added support for the `FromHost API` to ease use with on-prem containers and sovereign clouds. See documentation [here](speech-container-howto.md).
-- JavaScript: We now honor `NODE_TLS_REJECT_UNAUTHORIZED` thanks to a contribution from [orgads](https://github.com/orgads). See details [here](https://github.com/microsoft/cognitive-services-speech-sdk-js/pull/75).
-
-**Breaking changes**
-
-- `OpenSSL` has been updated to version 1.1.1b and is statically linked to the Speech SDK core library for Linux. This may cause a break if your inbox `OpenSSL` has not been installed to the `/usr/lib/ssl` directory in the system. Check [our documentation](how-to-configure-openssl-linux.md) under the Speech SDK docs to work around the issue.
-- We have changed the data type returned for C# `WordLevelTimingResult.Offset` from `int` to `long` to allow access to `WordLevelTimingResults` when speech data is longer than 2 minutes.
-- `PushAudioInputStream` and `PullAudioInputStream` now send wav header information to the Speech Service based on the `AudioStreamFormat`, optionally specified when they were created. Customers must now use the [supported audio input format](how-to-use-audio-input-streams.md). Any other formats will get suboptimal recognition results or may cause other issues.
-
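The `Offset` type change above follows from tick arithmetic: assuming offsets are reported in 100-nanosecond ticks (the unit used elsewhere in the SDK), a signed 32-bit value runs out after only a few minutes of audio. A back-of-the-envelope check:

```python
TICKS_PER_SECOND = 10_000_000  # offsets assumed to be in 100-ns ticks
INT32_MAX = 2**31 - 1

# Longest offset a signed 32-bit int can represent, in seconds.
max_seconds_int32 = INT32_MAX / TICKS_PER_SECOND  # ~214.7 s, about 3.6 minutes

# A 64-bit long comfortably covers any realistic audio length.
max_seconds_int64 = (2**63 - 1) / TICKS_PER_SECOND  # on the order of tens of thousands of years
```

So recognitions running past the low-minutes range need the wider `long` type to keep word-level offsets representable.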
-**Bug fixes**
-
-- See the `OpenSSL` update under Breaking changes above. We fixed both an intermittent crash and a performance issue (lock contention under high load) in Linux and Java.
-- Java: Made improvements to object closure in high concurrency scenarios.
-- Restructured our NuGet package. We removed the three copies of `Microsoft.CognitiveServices.Speech.core.dll` and `Microsoft.CognitiveServices.Speech.extension.kws.dll` under the lib folders, making the NuGet package smaller and faster to download, and we added headers needed to compile some C++ native apps.
-- Fixed quickstart samples [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp). These were exiting without displaying a "microphone not found" exception on Linux, macOS, and Windows.
-- Fixed an SDK crash with long speech recognition results on certain code paths like [this sample](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/uwp/speechtotext-uwp).
-- Fixed an SDK deployment error in the Azure Web App environment to address [this customer issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/396).
-- Fixed a TTS error while using multiple `<voice>` tags or the `<audio>` tag to address [this customer issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/433).
-- Fixed a TTS 401 error when the SDK is recovered from being suspended.
-- JavaScript: Fixed a circular import of audio data thanks to a contribution from [euirim](https://github.com/euirim).
-- JavaScript: Added support for setting service properties, as added in 1.7.
-- JavaScript: Fixed an issue where a connection error could result in continuous, unsuccessful websocket reconnect attempts.
-
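For the stream header requirement in the breaking changes above, here is a minimal sketch of producing a conforming input in memory, assuming the commonly used 16 kHz, 16-bit, mono PCM wav format (check the supported-formats documentation for the exact requirements):

```python
import io
import wave

def make_wav_bytes(sample_rate: int = 16000, seconds: float = 0.1) -> bytes:
    """Write a short mono 16-bit PCM wav file into memory and return its bytes.

    The wave module emits the RIFF/WAVE header that matches the parameters
    set below, which is the kind of header the input streams now forward.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(sample_rate)
        # Fill with silence for the requested duration.
        w.writeframes(b"\x00\x00" * int(sample_rate * seconds))
    return buf.getvalue()
```

Feeding bytes produced this way into a push stream keeps the header consistent with the declared format.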
-**Samples**
-
-- Added a keyword recognition sample for Android [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/android/sdkdemo).
-- Added a TTS sample for the server scenario [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_synthesis_server_scenario_sample.cs).
-- Added multi-device conversation quickstarts for C# and C++ [here](quickstarts/multi-device-conversation.md).
-
-**Other changes**
-
-- Optimized SDK core library size on Android.
-- The SDK in 1.9.0 and onwards supports both `int` and `string` types in the voice signature version field for Conversation Transcriber.
-
-## Speech SDK 1.8.0: 2019-November release
-
-**New Features**
-
-- Added a `FromHost()` API, to ease use with on-prem containers and sovereign clouds.
-- Added Source Language Identification for Speech Recognition (in Java and C++).
-- Added a `SourceLanguageConfig` object for Speech Recognition, used to specify expected source languages (in Java and C++).
-- Added `KeywordRecognizer` support on Windows (UWP), Android and iOS through the NuGet and Unity packages.
-- Added a Remote Conversation Java API to do Conversation Transcription in asynchronous batches.
-
-**Breaking changes**
-
-- Conversation Transcriber functionality moved under the namespace `Microsoft.CognitiveServices.Speech.Transcription`.
-- Parts of the Conversation Transcriber methods moved to the new `Conversation` class.
-- Dropped support for 32-bit (ARMv7 and x86) iOS.
-
-**Bug fixes**
-
-- Fix for a crash if a local `KeywordRecognizer` is used without a valid Speech service subscription key.
-
-**Samples**
-
-- Xamarin sample for `KeywordRecognizer`
-- Unity sample for `KeywordRecognizer`
-- C++ and Java samples for Automatic Source Language Identification.
-
-## Speech SDK 1.7.0: 2019-September release
-
-**New Features**
-
-- Added beta support for Xamarin on Universal Windows Platform (UWP), Android, and iOS
-- Added iOS support for Unity
-- Added `Compressed` input support for ALaw, Mulaw, and FLAC on Android, iOS and Linux
-- Added `SendMessageAsync` in the `Connection` class for sending a message to the service
-- Added `SetMessageProperty` in the `Connection` class for setting a property of a message
-- TTS: added bindings for Java (JRE and Android), Python, Swift, and Objective-C
-- TTS: added playback support for macOS, iOS, and Android
-- Added "word boundary" information for TTS
-
-**Bug fixes**
-
-- Fixed an IL2CPP build issue on Unity 2019 for Android
-- Fixed an issue with malformed headers in wav file input being processed incorrectly
-- Fixed an issue with UUIDs not being unique in some connection properties
-- Fixed a few warnings about nullability specifiers in the Swift bindings (might require small code changes)
-- Fixed a bug that caused websocket connections to be closed ungracefully under network load
-- Fixed an issue on Android that sometimes resulted in duplicate impression IDs used by `DialogServiceConnector`
-- Improvements to the stability of connections across multi-turn interactions, and to the reporting of failures (via `Canceled` events) when they occur with `DialogServiceConnector`
-- `DialogServiceConnector` session starts will now properly provide events, including when calling `ListenOnceAsync()` during an active `StartKeywordRecognitionAsync()`
-- Addressed a crash associated with `DialogServiceConnector` activities being received
-
-**Samples**
--- Quickstart for Xamarin-- Updated CPP Quickstart with Linux ARM64 information-- Updated Unity quickstart with iOS information-
-## Speech SDK 1.6.0: 2019-June release
-
-**Samples**
--- Quickstart samples for Text To Speech on UWP and Unity-- Quickstart sample for Swift on iOS-- Unity samples for Speech & Intent Recognition and Translation-- Updated quickstart samples for `DialogServiceConnector`-
-**Improvements / Changes**
--- Dialog namespace:
- - `SpeechBotConnector` has been renamed to `DialogServiceConnector`
- - `BotConfig` has been renamed to `DialogServiceConfig`
- - `BotConfig::FromChannelSecret()` has been remapped to `DialogServiceConfig::FromBotSecret()`
- - All existing Direct Line Speech clients continue to be supported after the rename
-- Update TTS REST adapter to support proxy, persistent connection-- Improve error message when an invalid region is passed-- Swift/Objective-C:
- - Improved error reporting: Methods that can result in an error are now present in two versions: One that exposes an `NSError` object for error handling, and one that raises an exception. The former are exposed to Swift. This change requires adaptations to existing Swift code.
- - Improved event handling
-
-**Bug fixes**
--- Fix for TTS: where `SpeakTextAsync` future returned without waiting until audio has completed rendering-- Fix for marshaling strings in C# to enable full language support-- Fix for .NET core app problem to load core library with net461 target framework in samples-- Fix for occasional issues to deploy native libraries to the output folder in samples-- Fix for web socket closing reliably-- Fix for possible crash while opening a connection under heavy load on Linux-- Fix for missing metadata in the framework bundle for macOS-- Fix for problems with `pip install --user` on Windows-
-## Speech SDK 1.5.1
-
-This is a bug fix release that affects only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-**Bug fixes**
--- Fix FromSubscription when used with Conversation Transcription.-- Fix bug in keyword spotting for voice assistants.-
-## Speech SDK 1.5.0: 2019-May release
-
-**New features**
--- Keyword spotting (KWS) is now available for Windows and Linux. KWS functionality might work with any microphone type; official KWS support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK.-- Phrase hint functionality is available through the SDK. For more information, see [here](./get-started-speech-to-text.md).-- Conversation transcription functionality is available through the SDK. See [here](./conversation-transcription.md).-- Added support for voice assistants using the Direct Line Speech channel.-
-**Samples**
--- Added samples for new features or new services supported by the SDK.-
-**Improvements / Changes**
--- Added various recognizer properties to adjust service behavior or service results (like masking profanity and others).-- You can now configure the recognizer through the standard configuration properties, even if you created the recognizer `FromEndpoint`.-- Objective-C: `OutputFormat` property was added to `SPXSpeechConfiguration`.-- The SDK now supports Debian 9 as a Linux distribution.-
-**Bug fixes**
--- Fixed a problem where the speaker resource was destructed too early in Text-to-Speech.-
-## Speech SDK 1.4.2
-
-This is a bug fix release that affects only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-## Speech SDK 1.4.1
-
-This is a JavaScript-only release. No features have been added. The following fixes were made:
--- Prevent web pack from loading https-proxy-agent.-
-## Speech SDK 1.4.0: 2019-April release
-
-**New features**
--- The SDK now supports the Text-to-Speech service as a beta version. It is supported on Windows and Linux Desktop from C++ and C#. For more information, check the [Text-to-Speech overview](text-to-speech.md#get-started).-- The SDK now supports MP3 and Opus/OGG audio files as stream input files. This feature is available only on Linux from C++ and C# and is currently in beta (more details [here](how-to-use-codec-compressed-audio-input-streams.md)).-- The Speech SDK for Java, .NET core, C++ and Objective-C have gained macOS support. The Objective-C support for macOS is currently in beta.-- iOS: The Speech SDK for iOS (Objective-C) is now also published as a CocoaPod.-- JavaScript: Support for non-default microphone as an input device.-- JavaScript: Proxy support for Node.js.-
-**Samples**
--- Samples for using the Speech SDK with C++ and with Objective-C on macOS have been added.-- Samples demonstrating the usage of the Text-to-Speech service have been added.-
-**Improvements / Changes**
--- Python: Additional properties of recognition results are now exposed via the `properties` property.-- For additional development and debug support, you can redirect SDK logging and diagnostics information into a log file (more details [here](how-to-use-logging.md)).-- JavaScript: Improve audio processing performance.-
-**Bug fixes**
--- Mac/iOS: A bug that led to a long wait when a connection to the Speech service could not be established was fixed.-- Python: improve error handling for arguments in Python callbacks.-- JavaScript: Fixed wrong state reporting for speech ended on RequestSession.-
-## Speech SDK 1.3.1: 2019-February refresh
-
-This is a bug fix release that affects only the native/managed SDK. It does not affect the JavaScript version of the SDK.
-
-**Bug fix**
--- Fixed a memory leak when using microphone input. Stream based or file input is not affected.-
-## Speech SDK 1.3.0: 2019-February release
-
-**New Features**
--- The Speech SDK supports selection of the input microphone through the `AudioConfig` class. This allows you to stream audio data to the Speech service from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript.-- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://aka.ms/csspeech/samples). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).-- The file `Microsoft.CognitiveServices.Speech.csharp.bindings.dll` (shipped in previous releases) isn't needed anymore. The functionality is now integrated into the core SDK.-
-**Samples**
-
-The following new content is available in our [sample repository](https://aka.ms/csspeech/samples):
--- Additional samples for `AudioConfig.FromMicrophoneInput`.-- Additional Python samples for intent recognition and translation.-- Additional samples for using the `Connection` object in iOS.-- Additional Java samples for translation with audio output.-- New sample for use of the [Batch Transcription REST API](batch-transcription.md).-
-**Improvements / Changes**
--- Python
- - Improved parameter verification and error messages in `SpeechConfig`.
- - Add support for the `Connection` object.
- - Support for 32-bit Python (x86) on Windows.
- - The Speech SDK for Python is out of beta.
-- iOS
- - The SDK is now built against the iOS SDK version 12.1.
- - The SDK now supports iOS versions 9.2 and later.
- - Improve reference documentation and fix several property names.
-- JavaScript
- - Add support for the `Connection` object.
- - Add type definition files for bundled JavaScript
- - Initial support and implementation for phrase hints.
- - Return properties collection with service JSON for recognition
-- Windows DLLs now contain a version resource.-- If you create a recognizer `FromEndpoint`, you can add parameters directly to the endpoint URL. When you use `FromEndpoint`, you can't configure the recognizer through the standard configuration properties.-
-**Bug fixes**
--- Empty proxy username and proxy password were not handled correctly. With this release, if you set proxy username and proxy password to an empty string, they will not be submitted when connecting to the proxy.-- Session IDs created by the SDK were not always truly random for some languages/environments. Added random generator initialization to fix this issue.-- Improved handling of authorization tokens. If you want to use an authorization token, specify it in the `SpeechConfig` and leave the subscription key empty. Then create the recognizer as usual.-- In some cases the `Connection` object wasn't released correctly. This issue has been fixed.-- The JavaScript sample was fixed to support audio output for translation synthesis also on Safari.-
-## Speech SDK 1.2.1
-
-This is a JavaScript-only release. No features have been added. The following fixes were made:
--- Fire end of stream at turn.end, not at speech.end.-- Fix bug in audio pump that did not schedule next send if the current send failed.-- Fix continuous recognition with auth token.-- Bug fix for different recognizer / endpoints.-- Documentation improvements.-
-## Speech SDK 1.2.0: 2018-December release
-
-**New Features**
--- Python
 - - The Beta version of Python support (3.5 and above) is available with this release. For more information, see [here](quickstart-python.md).
-- JavaScript
- - The Speech SDK for JavaScript has been open-sourced. The source code is available on [GitHub](https://github.com/Microsoft/cognitive-services-speech-sdk-js).
- - We now support Node.js, more info can be found [here](./get-started-speech-to-text.md).
 - The length restriction for audio sessions has been removed; reconnection happens automatically under the covers.
-- `Connection` object
- - From the `Recognizer`, you can access a `Connection` object. This object allows you to explicitly initiate the service connection and subscribe to connect and disconnect events.
- (This feature is not yet available from JavaScript and Python.)
-- Support for Ubuntu 18.04.-- Android
- - Enabled ProGuard support during APK generation.
-
-**Improvements**
--- Improvements in the internal thread usage, reducing the number of threads, locks, mutexes.-- Improved error reporting / information. In several cases, error messages have not been propagated out all the way out.-- Updated development dependencies in JavaScript to use up-to-date modules.-
-**Bug fixes**
--- Fixed memory leaks due to a type mismatch in `RecognizeAsync`.-- In some cases exceptions were being leaked.-- Fixing memory leak in translation event arguments.-- Fixed a locking issue on reconnect in long running sessions.-- Fixed an issue that could lead to missing final result for failed translations.-- C#: If an `async` operation wasn't awaited in the main thread, it was possible the recognizer could be disposed before the async task was completed.-- Java: Fixed a problem resulting in a crash of the Java VM.-- Objective-C: Fixed enum mapping; RecognizedIntent was returned instead of `RecognizingIntent`.-- JavaScript: Set default output format to 'simple' in `SpeechConfig`.-- JavaScript: Removing inconsistency between properties on the config object in JavaScript and other languages.-
-**Samples**
--- Updated and fixed several samples (for example output voices for translation, etc.).-- Added Node.js samples in the [sample repository](https://aka.ms/csspeech/samples).-
-## Speech SDK 1.1.0
-
-**New Features**
--- Support for Android x86/x64.-- Proxy Support: In the `SpeechConfig` object, you can now call a function to set the proxy information (hostname, port, username, and password). This feature is not yet available on iOS.-- Improved error codes and messages. If a recognition returned an error, `Reason` (in the canceled event) or `CancellationDetails` (in the recognition result) was already set to `Error`. The canceled event now contains two additional members, `ErrorCode` and `ErrorDetails`. If the server returned additional error information with the reported error, it will now be available in the new members.-
-**Improvements**
--- Added additional verification in the recognizer configuration, and added additional error message.-- Improved handling of long-time silence in middle of an audio file.-- NuGet package: for .NET Framework projects, it prevents building with AnyCPU configuration.-
-**Bug fixes**
--- Fixed several exceptions found in recognizers. In addition, exceptions are caught and converted into `Canceled` event.-- Fix a memory leak in property management.-- Fixed bug in which an audio input file could crash the recognizer.-- Fixed a bug where events could be received after a session stop event.-- Fixed some race conditions in threading.-- Fixed an iOS compatibility issue that could result in a crash.-- Stability improvements for Android microphone support.-- Fixed a bug where a recognizer in JavaScript would ignore the recognition language.-- Fixed a bug preventing setting the `EndpointId` (in some cases) in JavaScript.-- Changed parameter order in AddIntent in JavaScript, and added missing `AddIntent` JavaScript signature.-
-**Samples**
--- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://aka.ms/csspeech/samples).-
-## Speech SDK 1.0.1
-
-Reliability improvements and bug fixes:
--- Fixed potential fatal error due to race condition in disposing recognizer.-- Fixed potential fatal error when unset properties occur.-- Added additional error and parameter checking.-- Objective-C: Fixed possible fatal error caused by name overriding in NSString.-- Objective-C: Adjusted visibility of API.-- JavaScript: Fixed issues regarding events and their payloads.-- Documentation improvements.-
-In our [sample repository](https://aka.ms/csspeech/samples), a new sample for JavaScript was added.
-
-## Cognitive Services Speech SDK 1.0.0: 2018-September release
-
-**New features**
--- Support for Objective-C on iOS. Check out our [Objective-C quickstart for iOS](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/objectivec/ios/from-microphone).-- Support for JavaScript in browser. Check out our [JavaScript quickstart](./get-started-speech-to-text.md).-
-**Breaking changes**
--- With this release, a number of breaking changes are introduced.
- Check [this page](https://aka.ms/csspeech/breakingchanges_1_0_0) for details.
-
-## Cognitive Services Speech SDK 0.6.0: 2018-August release
-
-**New features**
--- UWP apps built with the Speech SDK now can pass the Windows App Certification Kit (WACK).
 Check out the [UWP quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp).
-- Support for .NET Standard 2.0 on Linux (Ubuntu 16.04 x64).-- Experimental: Support Java 8 on Windows (64-bit) and Linux (Ubuntu 16.04 x64).
- Check out the [Java Runtime Environment quickstart](./get-started-speech-to-text.md?pivots=programming-language-java&tabs=jre).
-
-**Functional change**
--- Expose additional error detail information on connection errors.-
-**Breaking changes**
--- On Java (Android), the `SpeechFactory.configureNativePlatformBindingWithDefaultCertificate` function no longer requires a path parameter. Now the path is automatically detected on all supported platforms.-- The get-accessor of the property `EndpointUrl` in Java and C# was removed.-
-**Bug fixes**
--- In Java, the audio synthesis result on the translation recognizer is implemented now.-- Fixed a bug that could cause inactive threads and an increased number of open and unused sockets.-- Fixed a problem, where a long-running recognition could terminate in the middle of the transmission.-- Fixed a race condition in recognizer shutdown.-
-## Cognitive Services Speech SDK 0.5.0: 2018-July release
-
-**New features**
--- Support Android platform (API 23: Android 6.0 Marshmallow or higher). Check out the [Android quickstart](./get-started-speech-to-text.md?pivots=programming-language-java&tabs=android).-- Support .NET Standard 2.0 on Windows. Check out the [.NET Core quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnetcore).-- Experimental: Support UWP on Windows (version 1709 or later).
- - Check out the [UWP quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=uwp).
- - Note that UWP apps built with the Speech SDK do not yet pass the Windows App Certification Kit (WACK).
-- Support long-running recognition with automatic reconnection.-
-**Functional changes**
--- `StartContinuousRecognitionAsync()` supports long-running recognition.-- The recognition result contains more fields: the offset from the audio beginning and the duration (both in ticks) of the recognized text, and additional values that represent recognition status, for example, `InitialSilenceTimeout` and `InitialBabbleTimeout`.-- Support AuthorizationToken for creating factory instances.-
-**Breaking changes**
--- Recognition events: `NoMatch` event type was merged into the `Error` event.-- SpeechOutputFormat in C# was renamed to `OutputFormat` to stay aligned with C++.-- The return type of some methods of the `AudioInputStream` interface changed slightly:
- - In Java, the `read` method now returns `long` instead of `int`.
- - In C#, the `Read` method now returns `uint` instead of `int`.
- - In C++, the `Read` and `GetFormat` methods now return `size_t` instead of `int`.
-- C++: Instances of audio input streams now can be passed only as a `shared_ptr`.-
-**Bug fixes**
--- Fixed incorrect return values in the result when `RecognizeAsync()` times out.-- The dependency on media foundation libraries on Windows was removed. The SDK now uses Core Audio APIs.-- Documentation fix: Added a [regions](regions.md) page to describe the supported regions.-
-**Known issue**
--- The Speech SDK for Android doesn't report speech synthesis results for translation. This issue will be fixed in the next release.-
-## Cognitive Services Speech SDK 0.4.0: 2018-June release
-
-**Functional changes**
--- AudioInputStream-
- A recognizer now can consume a stream as the audio source. For more information, see the related [how-to guide](how-to-use-audio-input-streams.md).
--- Detailed output format-
- When you create a `SpeechRecognizer`, you can request `Detailed` or `Simple` output format. The `DetailedSpeechRecognitionResult` contains a confidence score, recognized text, raw lexical form, normalized form, and normalized form with masked profanity.
-
-**Breaking change**
--- Changed to `SpeechRecognitionResult.Text` from `SpeechRecognitionResult.RecognizedText` in C#.-
-**Bug fixes**
--- Fixed a possible callback issue in the USP layer during shutdown.-- If a recognizer consumed an audio input file, it was holding on to the file handle longer than necessary.-- Removed several deadlocks between the message pump and the recognizer.-- Fire a `NoMatch` result when the response from the service times out.-- The media foundation libraries on Windows are delay loaded. These libraries are required for microphone input only.-- The upload speed for audio data is limited to about twice the original audio speed.-- On Windows, C# .NET assemblies are now strong named.-- Documentation fix: `Region` is required information to create a recognizer.-
-More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://aka.ms/csspeech/samples).
-
-## Cognitive Services Speech SDK 0.2.12733: 2018-May release
-
-This release is the first public preview release of the Cognitive Services Speech SDK.
+***
container-registry Container Registry Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-intro.md
Azure Container Registry is a managed, private Docker registry service based on
Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.
-For more about Docker and registry concepts, see the [Docker overview](https://docs.docker.com/engine/docker-overview/) and [About registries, repositories, and images](container-registry-concepts.md).
+To learn more about Docker and registry concepts, see the [Docker overview](https://docs.docker.com/engine/docker-overview/) and [About registries, repositories, and images](container-registry-concepts.md).
## Use cases
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
As your schema evolves, and new properties are added over time, the analytical s
The following constraints are applicable on the operational data in Azure Cosmos DB when you enable analytical store to automatically infer and represent the schema correctly:
-* You can have a maximum of 1000 properties at any nesting level in the schema and a maximum nesting depth of 127.
+* You can have a maximum of 1000 properties across all nested levels in the document schema and a maximum nesting depth of 127.
* Only the first 1000 properties are represented in the analytical store.
* Only the first 127 nested levels are represented in the analytical store.
* The first level of a JSON document is its `/` root level.
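To make the counting rules above concrete, here is a rough, illustrative helper (not part of any Azure Cosmos DB SDK) that tallies properties across all nesting levels and reports the deepest level, assuming the document root is level 1 and scalar values don't open a new level:

```python
import json

def schema_stats(node, level=1):
    """Return (property_count, max_level) for a JSON value.

    Properties are counted across every nesting level; level 1 is the
    document's `/` root level, and scalar values live at their parent's
    level. Arrays are treated as transparent (an assumption for this sketch).
    """
    if isinstance(node, dict):
        count, deepest = 0, level
        for value in node.values():
            count += 1  # the property itself
            child_count, child_depth = schema_stats(value, level + 1)
            count += child_count
            deepest = max(deepest, child_depth)
        return count, deepest
    if isinstance(node, list):
        count, deepest = 0, level
        for item in node:
            child_count, child_depth = schema_stats(item, level)
            count += child_count
            deepest = max(deepest, child_depth)
        return count, deepest
    return 0, level - 1  # scalar: contributes no properties or depth of its own

doc = json.loads('{"id": "1", "address": {"city": "Seattle", "geo": {"lat": 47.6}}}')
print(schema_stats(doc))  # (5, 3): five properties, three nesting levels
```

A document that reported more than 1000 properties or a level beyond 127 from a check like this would have the excess truncated in the analytical store representation.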
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 06/15/2021 Last updated : 01/10/2022
try
// Log the diagnostics and add any additional info necessary to correlate to other logs Console.Write(response.Diagnostics.ToString()); }
-}catch(CosmosException cosmosException){
+}
+catch (CosmosException cosmosException)
+{
// Log the full exception including the stack trace Console.Write(cosmosException.ToString()); // The Diagnostics can be logged separately if required. Console.Write(cosmosException.Diagnostics.ToString()); }
+// When using Stream APIs
ResponseMessage response = await this.Container.CreateItemStreamAsync(partitionKey, stream);
-if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || IsFailureStatusCode(response.StatusCode))
+if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || !response.IsSuccessStatusCode)
{ // Log the diagnostics and add any additional info necessary to correlate to other logs Console.Write(response.Diagnostics.ToString());
The JSON structure has breaking changes with each version of the SDK. This makes
### <a name="cpu-history"></a>CPU history High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries where the requests might do multiple connections for a single query.
+# [3.21 or greater SDK](#tab/cpu-new)
+
+The timeouts will contain *Diagnostics*, which contain:
+
+```json
+"systemHistory": [
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+{
+"dateUtc": "2021-11-17T23:38:28.3115496Z",
+"cpu": 16.731,
+"memory": 9024120.000,
+"threadInfo": {
+"isThreadStarving": "False",
+....
+}
+
+},
+...
+]
+```
+
+* If the `cpu` values are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+
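As an illustration of the two checks above, here is a rough Python sketch (not part of the Cosmos DB SDK; the sample entries are hypothetical but shaped like the `systemHistory` excerpt):

```python
# Hypothetical entries mimicking the systemHistory diagnostics shape;
# the values here are invented for illustration.
system_history = [
    {"dateUtc": "2021-11-17T23:38:28.3115496Z", "cpu": 16.731,
     "memory": 9024120.000, "threadInfo": {"isThreadStarving": "False"}},
    {"dateUtc": "2021-11-17T23:38:38.3115496Z", "cpu": 84.2,
     "memory": 9024120.000, "threadInfo": {"isThreadStarving": "True"}},
]

def diagnose(history, cpu_threshold=70.0):
    """Flag CPU exhaustion (any sample over 70%) and thread starvation
    (any entry with isThreadStarving == "True")."""
    findings = set()
    for entry in history:
        if entry["cpu"] > cpu_threshold:
            findings.add("cpu-exhaustion")
        if entry["threadInfo"]["isThreadStarving"] == "True":
            findings.add("thread-starvation")
    return sorted(findings)

print(diagnose(system_history))  # ['cpu-exhaustion', 'thread-starvation']
```

In practice you would feed this the parsed `Diagnostics` JSON logged by your application rather than hard-coded samples.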
+# [Older SDK](#tab/cpu-old)
+ If the error contains `TransportException` information, it might also contain `CPU History`: ```
-CPU history:
-(2020-08-28T00:40:09.1769900Z 0.114),
-(2020-08-28T00:40:19.1763818Z 1.732),
-(2020-08-28T00:40:29.1759235Z 0.000),
-(2020-08-28T00:40:39.1763208Z 0.063),
-(2020-08-28T00:40:49.1767057Z 0.648),
-(2020-08-28T00:40:59.1689401Z 0.137),
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
CPU count: 8) ```
-* If the CPU utilization is over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it or scale the machine to a larger resource size.
-* If the CPU measurements are not happening every 10 seconds, the gaps or measurement times indicate larger times in between measurements. In such a case, the cause is thread starvation. The solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
+* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, gaps or timestamps show longer intervals between measurements), the cause is thread starvation. In this case, the solution is to investigate the source/s of the thread starvation (potentially locked threads), or scale the machine/s to a larger resource size.
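The same two checks can be applied to the older `CPU History` text format. The sketch below is illustrative only; it drops two samples from the excerpt above so the gap check has something to find (the timestamps jump 30 seconds):

```python
import re
from datetime import datetime

# CPU history lines in the older format, with two samples removed
# to create a gap between 00:40:29 and 00:40:59.
cpu_history = """
(2020-08-28T00:40:09.1769900Z 0.114),
(2020-08-28T00:40:19.1763818Z 1.732),
(2020-08-28T00:40:29.1759235Z 0.000),
(2020-08-28T00:40:59.1689401Z 0.137),
"""

def parse_cpu_history(text):
    samples = []
    for ts, cpu in re.findall(r"\(([\d\-T:.]+)Z ([\d.]+)\)", text):
        # %f accepts at most six fractional digits; trim the seventh.
        samples.append(
            (datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f"), float(cpu)))
    return samples

def find_issues(samples, cpu_threshold=70.0, interval_s=10.0):
    findings = set()
    if any(cpu > cpu_threshold for _, cpu in samples):
        findings.add("cpu-exhaustion")
    for (t0, _), (t1, _) in zip(samples, samples[1:]):
        # Measurements should arrive about every 10 seconds; a larger
        # gap suggests thread starvation.
        if (t1 - t0).total_seconds() > interval_s * 1.5:
            findings.add("thread-starvation")
    return sorted(findings)

print(find_issues(parse_cpu_history(cpu_history)))  # ['thread-starvation']
```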
+ #### Solution: The client application that uses the SDK should be scaled up or out.
databox-gateway Data Box Gateway 2105 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-gateway/data-box-gateway-2105-release-notes.md
+
+ Title: Azure Data Box Gateway 2105 release notes| Microsoft Docs
+description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2105 release.
++
+
Last updated : 01/07/2022
+# Azure Data Box Gateway 2105 release notes
+
+The following release notes identify the critical open issues and the resolved issues for the 2105 release of Azure Data Box Gateway.
+
+The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes.
+
+This release corresponds to the software version:
+
+- **Data Box Gateway 2105 (1.6.1588.3220-42623-42265845)** - 4618211
+
+Update 2105 can be applied to all prior releases of Data Box Gateway.
+
+## What's new
+
+This release contains the following bug fix:
+
+- **Buffer overrun results in abrupt reboot of gateway** - This release fixes a bug that can cause a buffer overrun resulting in access of invalid memory, leading to an abrupt, unexpected reboot of the gateway device. The error can occur when a client accesses the last several bytes of a file whose data needs to be read back by the appliance from Azure, and the file size isn't a multiple of 4096 bytes.
+
+This release also contains the following updates:
+
+- All cumulative updates and .NET framework updates through April 2021.
+
+## Known issues in this release
+
+No new issues are release noted for this release. All the release noted issues have carried over from the previous releases. To see a list of known issues, go to [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release).
+
+## Next steps
+
+- [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md)
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-overview.md
Last updated 11/09/2021
Defender for Cloud generates alerts for resources deployed on your Azure, on-premises, and hybrid cloud environments.
-Security alerts are triggered by advanced detections and are available only with enhanced security features enabled. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). A free 30-day trial is available. For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Security alerts are triggered by advanced detections and are available only with enhanced security features enabled. You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). A free 30-day trial is available. For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## What are security alerts and security incidents?
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/alerts-reference.md
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Fileless Attack Behavior Detected**<br>(AppServices_FilelessAttackBehaviorDetection) | The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Fileless Attack Technique Detected**<br>(AppServices_FilelessAttackTechniqueDetection) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High | | **Fileless Attack Toolkit Detected**<br>(AppServices_FilelessAttackToolkitDetection) | The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically do not have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors}<br>(Applies to: App Service on Windows and App Service on Linux) | Defense Evasion, Execution | High |
-| **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
+| **Microsoft Defender for Cloud test alert for App Service (not a threat)**<br>(AppServices_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
| **NMap scanning detected**<br>(AppServices_Nmap) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with NMAP. Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Phishing content hosted on Azure Webapps**<br>(AppServices_PhishingContent) | URL used for phishing attack found on the Azure AppServices website. This URL was part of a phishing attack sent to Microsoft 365 customers. The content typically lures visitors into entering their corporate credentials or financial information into a legitimate looking website.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | High | | **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining.<br>(Applies to: App Service on Linux) | Defense Evasion, Command and Control, Exploitation | Medium |
-| **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
+| **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
| **Potential dangling DNS record for an App Service resource detected**<br>(AppServices_PotentialDanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This might leave you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity. In this case, a text record with the Domain Verification ID was found. Such text records prevent subdomain takeover but we still recommend removing the dangling domain. If you leave the DNS record pointing at the subdomain you're at risk if anyone in your organization deletes the TXT file or record in the future.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low | | **Potential reverse shell detected**<br>(AppServices_ReverseShell) | Analysis of host data detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns.<br>(Applies to: App Service on Linux) | Exfiltration, Exploitation | Medium | | **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service.<br>(Applies to: App Service on Windows) | Execution | Medium |
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-app-service-introduction.md
To protect your Azure App Service plan with Microsoft Defender for App Service,
| Aspect | Details | ||:--| | Release state: | General availability (GA) |
-| Pricing: | Microsoft Defender for App Service is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)<br>Billing is according to total compute instances in all plans |
+| Pricing: | Microsoft Defender for App Service is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)<br>Billing is according to total compute instances in all plans |
| Supported App Service plans: | [All App Service plans](https://azure.microsoft.com/pricing/details/app-service/plans/) are supported except [Azure Functions on the consumption plan](../azure-functions/functions-scale.md). | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | | | |
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-cloud-introduction.md
Use the advanced protection tiles in the [workload protections dashboard](worklo
- To get started with Defender for Cloud, you need a subscription to Microsoft Azure. If you don't have a subscription, [sign up for a free trial](https://azure.microsoft.com/free/). -- Defender for Cloud's free plan is enabled on all your current Azure subscriptions when you visit the Defender for Cloud pages in the Azure portal for the first time, or if enabled programmatically via the REST API. To take advantage of advanced security management and threat detection capabilities, you must enable the enhanced security features. These features are free for the first 30 days. [Learn more about the pricing](https://azure.microsoft.com/pricing/details/security-center/).
+- Defender for Cloud's free plan is enabled on all your current Azure subscriptions when you visit the Defender for Cloud pages in the Azure portal for the first time, or if enabled programmatically via the REST API. To take advantage of advanced security management and threat detection capabilities, you must enable the enhanced security features. These features are free for the first 30 days. [Learn more about the pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
- If you're ready to enable enhanced security features now, [Quickstart: Enable enhanced security features](enable-enhanced-security.md) walks you through the steps.
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-container-registries-cicd.md
You'll get traceability information such as the GitHub workflow and the GitHub
|Aspect|Details| |-|:-| |Release state:| **This CI/CD integration is in preview.**<br>We recommend that you experiment with it on non-production workflows only.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)| |||
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
|Aspect|Details| |-|:-| |Release state:|Generally available (GA)|
-|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Supported registries and images:|Linux images in ACR registries accessible from the public internet with shell access<br>[ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)| |Unsupported registries and images:|Windows images<br>'Private' registries (unless access is granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services))<br>Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images, or "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br>Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)| |Required roles and permissions:|**Security reader** and [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md)|
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-enable.md
Defender for Containers protects your clusters whether they're running in:
Learn about this plan in [Overview of Microsoft Defender for Containers](defender-for-containers-introduction.md). > [!NOTE] > Defender for Containers' support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is a preview feature. > > [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]- ::: zone pivot="defender-for-container-aks"-
-## Enable the plan
-
-1. From Defender for Cloud's menu, open the [Environment settings page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/EnvironmentSettings) and select the relevant subscription.
-
-1. In the [Defender plans page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/pricingTier), enable **Defender for Containers**
-
- > [!TIP]
- > If the subscription already has Defender for Kubernetes and/or Defender for container registries enabled, an update notice is shown. Otherwise, the only option will be **Defender for Containers**.
- >
- > :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-
-1. By default, the plan is configured to automatically defend any supported Kubernetes cluster that is attached to this subscription. To modify this configuration, select **Configure** in the configuration column.
-
- :::image type="content" source="media/defender-for-containers/defender-for-containers-provisioning-configuration.gif" alt-text="Viewing the configuration for Defender for Containers.":::
-
- You can also modify this configuration from the [Auto provisioning page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/dataCollection) on the **Microsoft Defender for Containers components (preview)** row:
-
- :::image type="content" source="media/defender-for-containers/auto-provisioning-defender-for-containers.png" alt-text="Screenshot of the auto provisioning options for Microsoft Defender for Containers." lightbox="./media/defender-for-containers/auto-provisioning-defender-for-containers.png":::
-
-1. If you disable the auto provisioning of any component, you can easily deploy the component to one or more clusters using the appropriate recommendation:
-
- - Policy Add-on for Kubernetes - [Azure Kubernetes Service clusters should have the Azure Policy Add-on for Kubernetes installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08e628db-e2ed-4793-bc91-d13e684401c3)
- - Azure Kubernetes Service profile - [Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a83a6e-c417-42ec-b567-1e6fcb3d09a9)
- - Azure Arc-enabled Kubernetes extension - [Azure Arc-enabled Kubernetes clusters should have the Defender extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ef9848c-c2c8-4ff3-8b9c-4c8eb8ddfce6)
--
-## Deploy the Defender profile
-
-You can enable the containers plan and deploy all of the relevant components from the Azure portal, the REST API, or with a Resource Manager template. For detailed steps, select the relevant tab.
-
-The Defender security profile is a preview feature. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
-### [**Azure portal**](#tab/aks-deploy-portal)
-
-### Use the fix button from the Defender for Cloud recommendation
-
-A streamlined, frictionless process lets you use the Azure portal pages to enable the Defender for Cloud plan and set up auto provisioning of all the necessary components for defending your Kubernetes clusters at scale.
-
-A dedicated Defender for Cloud recommendation provides:
--- **Visibility** about which of your clusters has the Defender profile deployed-- **Fix** button to deploy it to those clusters without the profile-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable enhanced security** security control.
-
-1. Use the filter to find the recommendation named **Azure Kubernetes Service clusters should have Defender profile enabled**.
-
- > [!TIP]
- > Notice the **Fix** icon in the actions column
-
-1. Select the clusters to see the details of the healthy and unhealthy resources - clusters with and without the profile.
-
-1. From the unhealthy resources list, select a cluster and select **Remediate** to open the pane with the remediation confirmation.
-
-1. Select **Fix *[x]* resources**.
--
-### [**REST API**](#tab/aks-deploy-rest)
-
-### Use the REST API to deploy the Defender profile
-
-To install the 'SecurityProfile' on an existing cluster with the REST API, run the following PUT command:
-
-```rest
-PUT https://management.azure.com/subscriptions/{{SubscriptionId}}/resourcegroups/{{ResourceGroup}}/providers/Microsoft.ContainerService/managedClusters/{{ClusterName}}?api-version={{ApiVersion}}
-```
-
-Request URI: `https://management.azure.com/subscriptions/{{SubscriptionId}}/resourcegroups/{{ResourceGroup}}/providers/Microsoft.ContainerService/managedClusters/{{ClusterName}}?api-version={{ApiVersion}}`
-
-Request query parameters:
-
-| Name | Description | Mandatory |
-|-||--|
-| SubscriptionId | Cluster's subscription ID | Yes |
-| ResourceGroup | Cluster's resource group | Yes |
-| ClusterName | Cluster's name | Yes |
-| ApiVersion | API version, must be >= 2021-07-01 | Yes |
-| | | |
-
-Request Body:
-
-```rest
-{
- "location": "{{Location}}",
- "properties": {
- "securityProfile": {
- "azureDefender": {
- "enabled": true,
- "logAnalyticsWorkspaceResourceId": "{{LAWorkspaceResourceId}}"
- }
- }
- }
-}
-```
-
-Request body parameters:
-
-| Name | Description | Mandatory |
-|--||--|
-| location | Cluster's location | Yes |
-| properties.securityProfile.azureDefender.enabled | Determines whether to enable or disable Microsoft Defender for Containers on the cluster | Yes |
-| properties.securityProfile.azureDefender.logAnalyticsWorkspaceResourceId | Log Analytics workspace Azure resource ID | Yes |
-| | | |
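As an illustration, the request URI and body described above can be assembled programmatically. The following is a hedged Python sketch: the subscription, resource group, cluster, location, and workspace names are all hypothetical placeholders, and the sketch only composes the request rather than sending it:

```python
import json

# Hypothetical placeholder values; substitute your own.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "myResourceGroup"
cluster_name = "myAKSCluster"
api_version = "2021-07-01"  # must be >= 2021-07-01, per the parameter table above

# Request URI for updating the managed cluster.
request_uri = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourcegroups/{resource_group}"
    f"/providers/Microsoft.ContainerService/managedClusters/{cluster_name}"
    f"?api-version={api_version}"
)

# Request body enabling the Defender profile, mirroring the structure above.
body = {
    "location": "eastus",  # placeholder: the cluster's location
    "properties": {
        "securityProfile": {
            "azureDefender": {
                "enabled": True,
                "logAnalyticsWorkspaceResourceId": (
                    f"/subscriptions/{subscription_id}"
                    f"/resourceGroups/{resource_group}"
                    "/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
                ),
            }
        }
    },
}

print(request_uri)
print(json.dumps(body, indent=2))
```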
--
-### [**Resource Manager**](#tab/aks-deploy-arm)
-
-### Use Azure Resource Manager to deploy the Defender profile
-
-To use Azure Resource Manager to deploy the Defender profile, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-To install the 'SecurityProfile' on an existing cluster with Resource Manager, use the following template:
-
-```json
-{
- "type": "Microsoft.ContainerService/managedClusters",
- "apiVersion": "2021-07-01",
- "name": "string",
- "location": "string",
- "properties": {
- …
- "securityProfile": {
- "azureDefender": {
- "enabled": true,
-        "logAnalyticsWorkspaceResourceId": "logAnalyticsWorkspaceResourceId"
-      }
-    }
- }
-}
-```
::: zone-end - ::: zone pivot="defender-for-container-arc"-
-## Enable the plan
-
-1. From Defender for Cloud's menu, open the [Environment settings page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/EnvironmentSettings) and select the relevant subscription.
-
-1. In the [Defender plans page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/pricingTier), enable **Defender for Containers**
-
- > [!TIP]
- > If the subscription already has Defender for Kubernetes and/or Defender for container registries enabled, an update notice is shown. Otherwise, the only option will be **Defender for Containers**.
- >
- > :::image type="content" source="media/release-notes/defender-plans-deprecated-indicator.png" alt-text="Defender for container registries and Defender for Kubernetes plans showing 'Deprecated' and upgrade information.":::
-
-1. By default, the plan is configured to automatically defend any supported Kubernetes cluster that is attached to this subscription. To modify this configuration, select **Configure** in the configuration column.
-
- :::image type="content" source="media/defender-for-containers/defender-for-containers-provisioning-configuration.gif" alt-text="Viewing the configuration for Defender for Containers.":::
-
- You can also modify this configuration from the [Auto provisioning page](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/dataCollection) on the **Microsoft Defender for Containers components (preview)** row:
-
- :::image type="content" source="media/defender-for-containers/auto-provisioning-defender-for-containers.png" alt-text="Screenshot of the auto provisioning options for Microsoft Defender for Containers." lightbox="./media/defender-for-containers/auto-provisioning-defender-for-containers.png":::
-
-1. If you disable the auto provisioning of any component, you can easily deploy the component to one or more clusters using the appropriate recommendation:
-
- - Policy Add-on for Kubernetes - [Azure Kubernetes Service clusters should have the Azure Policy Add-on for Kubernetes installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/08e628db-e2ed-4793-bc91-d13e684401c3)
- - Azure Kubernetes Service profile - [Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/56a83a6e-c417-42ec-b567-1e6fcb3d09a9)
- - Azure Arc-enabled Kubernetes extension - [Azure Arc-enabled Kubernetes clusters should have the Defender extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/3ef9848c-c2c8-4ff3-8b9c-4c8eb8ddfce6)
-
-## Prerequisites
-
-Before deploying the extension, ensure you:
-- [Connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)-- Complete the [pre-requisites listed under the generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md#prerequisites).-- Configure **port 443** on the following endpoints for outbound access:
- - For clusters on Azure Government cloud:
- - *.ods.opinsights.azure.us
- - *.oms.opinsights.azure.us
- - :::no-loc text="login.microsoftonline.us":::
- - For clusters on other Azure cloud deployments:
- - *.ods.opinsights.azure.com
- - *.oms.opinsights.azure.com
- - :::no-loc text="login.microsoftonline.com":::
-
-## Deploy the Defender extension
-
-You can deploy the Defender extension using a range of methods. For detailed steps, select the relevant tab.
-
-### [**Azure portal**](#tab/k8s-deploy-asc)
-
-### Use the fix button from the Defender for Cloud recommendation
-
-A dedicated Defender for Cloud recommendation provides:
--- **Visibility** about which of your clusters has the Defender for Kubernetes extension deployed-- **Fix** button to deploy it to those clusters without the extension-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable enhanced security** security control.
-
-1. Use the filter to find the recommendation named **Azure Arc-enabled Kubernetes clusters should have Defender for Cloud's extension installed**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
-
- > [!TIP]
- > Notice the **Fix** icon in the actions column
-
-1. Select the extension to see the details of the healthy and unhealthy resources - clusters with and without the extension.
-
-1. From the unhealthy resources list, select a cluster and select **Remediate** to open the pane with the remediation options.
-
-1. Select the relevant Log Analytics workspace and select **Remediate x resource**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/security-center-deploy-extension.gif" alt-text="Deploy Defender extension for Azure Arc with Defender for Cloud's 'fix' option.":::
--
-### [**Azure CLI**](#tab/k8s-deploy-cli)
-
-### Use Azure CLI to deploy the Defender extension
-
-1. Log in to Azure:
-
- ```azurecli
- az login
- az account set --subscription <your-subscription-id>
- ```
-
- > [!IMPORTANT]
- > Ensure that you use the same subscription ID for ``<your-subscription-id>`` as the one that was used when connecting your cluster to Azure Arc.
-
-1. Run the following command to deploy the extension on top of your Azure Arc-enabled Kubernetes cluster:
-
- ```azurecli
- az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group> --extension-type microsoft.azuredefender.kubernetes
- ```
-
- A description of all the supported configuration settings on the Defender extension type is given below:
-
- | Property | Description |
- |-|-|
- | logAnalyticsWorkspaceResourceID | **Optional**. Full resource ID of your own Log Analytics workspace.<br>When not provided, the default workspace of the region will be used.<br><br>To get the full resource ID, run the following command to display the list of workspaces in your subscriptions in the default JSON format:<br>```az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json```<br><br>The Log Analytics workspace resource ID has the following syntax:<br>/subscriptions/{your-subscription-id}/resourceGroups/{your-resource-group}/providers/Microsoft.OperationalInsights/workspaces/{your-workspace-name}. <br>Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces) |
- | auditLogPath |**Optional**. The full path to the audit log files.<br>When not provided, the default path ``/var/log/kube-apiserver/audit.log`` will be used.<br>For AKS Engine, the standard path is ``/var/log/kubeaudit/audit.log`` |
-
- The following command shows an example that uses all of the optional fields:
-
- ```azurecli
- az k8s-extension create --name microsoft.azuredefender.kubernetes --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --extension-type microsoft.azuredefender.kubernetes --configuration-settings logAnalyticsWorkspaceResourceID=<log-analytics-workspace-resource-id> auditLogPath=<your-auditlog-path>
- ```
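The Log Analytics workspace resource ID syntax described in the table above can be composed as follows. This is an illustrative Python sketch; the subscription, resource group, and workspace names passed in the example are placeholders:

```python
def workspace_resource_id(subscription_id: str, resource_group: str, workspace: str) -> str:
    """Compose a Log Analytics workspace resource ID using the syntax from the table above."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{workspace}"
    )

# Placeholder arguments for illustration only.
print(workspace_resource_id("sub-id", "my-rg", "my-workspace"))
```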
-
-### [**Resource Manager**](#tab/k8s-deploy-resource-manager)
-
-### Use Azure Resource Manager to deploy the Defender extension
-
-To use Azure Resource Manager to deploy the Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-You can use the **azure-defender-extension-arm-template.json** Resource Manager template from Defender for Cloud's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-### [**REST API**](#tab/k8s-deploy-api)
-
-### Use REST API to deploy the Defender extension
-
-To use the REST API to deploy the Defender extension, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> The simplest way to use the API to deploy the Defender extension is with the supplied **Postman Collection JSON** example from Defender for Cloud's [installation examples](https://aka.ms/kubernetes-extension-installation-examples).
-- To modify the Postman Collection JSON, or to manually deploy the extension with the REST API, run the following PUT command:-
- ```rest
- PUT https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
- ```
-
- Where:
-
- | Name | In | Required | Type | Description |
- |--||-|--|-|
- | Subscription ID | Path | True | String | Your Azure Arc-enabled Kubernetes resource's subscription ID |
- |Resource Group | Path | True | String | Name of the resource group containing your Azure Arc-enabled Kubernetes resource |
- | Cluster Name | Path | True | String | Name of your Azure Arc-enabled Kubernetes resource |
- | | | | | |
--
- For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
-
- `az account get-access-token --subscription <your-subscription-id>`
- Use the following structure for the body of your message:
- ```json
- {
- "properties": {
- "extensionType": "microsoft.azuredefender.kubernetes",
- "configurationSettings": {
- "logAnalytics.workspaceId":"YOUR-WORKSPACE-ID"
- // , "auditLogPath":"PATH/TO/AUDITLOG"
- },
- "configurationProtectedSettings": {
- "logAnalytics.key":"YOUR-WORKSPACE-KEY"
- }
- }
- }
- ```
-
- Description of the properties is given below:
-
- | Property | Description |
- | -- | -- |
- | logAnalytics.workspaceId | Workspace ID of the Log Analytics resource |
- | logAnalytics.key | Key of the Log Analytics resource |
- | auditLogPath | **Optional**. The full path to the audit log files. The default value is ``/var/log/kube-apiserver/audit.log`` |
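As a sketch, the message body above can be built and serialized in Python before sending the PUT request. The workspace ID and key values below are placeholders, and this helper is illustrative rather than part of the documented procedure:

```python
import json

# Hypothetical placeholder values; substitute your workspace's values.
workspace_id = "YOUR-WORKSPACE-ID"
workspace_key = "YOUR-WORKSPACE-KEY"

# Request body for the extension PUT, mirroring the structure shown above.
body = {
    "properties": {
        "extensionType": "microsoft.azuredefender.kubernetes",
        "configurationSettings": {
            "logAnalytics.workspaceId": workspace_id,
            # "auditLogPath": "PATH/TO/AUDITLOG",  # optional
        },
        "configurationProtectedSettings": {
            "logAnalytics.key": workspace_key,
        },
    }
}

print(json.dumps(body, indent=2))
```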
---
-## Verify the deployment
-
-To verify that your cluster has the Defender extension installed on it, follow the steps in one of the tabs below:
-
-### [**Azure portal - Defender for Cloud**](#tab/k8s-verify-asc)
-
-### Use Defender for Cloud recommendation to verify the status of your extension
-
-1. From Microsoft Defender for Cloud's recommendations page, open the **Enable Microsoft Defender for Cloud** security control.
-
-1. Select the recommendation named **Azure Arc-enabled Kubernetes clusters should have Microsoft Defender for Cloud's extension installed**.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender extension for Azure Arc-enabled Kubernetes clusters." lightbox="media/defender-for-kubernetes-azure-arc/extension-recommendation.png":::
-
-1. Check that the cluster on which you deployed the extension is listed as **Healthy**.
--
-### [**Azure portal - Azure Arc**](#tab/k8s-verify-arc)
-
-### Use the Azure Arc pages to verify the status of your extension
-
-1. From the Azure portal, open **Azure Arc**.
-1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
-1. Open the extensions page. The extensions on the cluster are listed. To confirm whether the Defender extension was installed correctly, check the **Install status** column.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png" alt-text="Azure Arc page for checking the status of all installed extensions on a Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-installed-clusters-page.png":::
-
-1. For more details, select the extension.
-
- :::image type="content" source="./media/defender-for-kubernetes-azure-arc/extension-details-page.png" alt-text="Full details of an Azure Arc extension on a Kubernetes cluster.":::
--
-### [**Azure CLI**](#tab/k8s-verify-cli)
-
-### Use Azure CLI to verify that the extension is deployed
-
-1. Run the following command on Azure CLI:
-
- ```azurecli
- az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
- ```
-
-1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
-
- > [!NOTE]
- > It might show "installState": "Pending" for the first few minutes.
-
-1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
-
- ```console
- kubectl get pods -n azuredefender
- ```
-
-### [**REST API**](#tab/k8s-verify-api)
-
-### Use the REST API to verify that the extension is deployed
-
-To confirm a successful deployment, or to validate the status of your extension at any time:
-
-1. Run the following GET command:
-
- ```rest
- GET https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
- ```
-
-1. In the response, look for "extensionType": "microsoft.azuredefender.kubernetes" and "installState": "Installed".
-
- > [!TIP]
- > It might show "installState": "Pending" for the first few minutes.
-
-1. If the state shows **Installed**, run the following command on your machine with the `kubeconfig` file pointed to your cluster to check that a pod called "azuredefender-XXXXX" is in 'Running' state:
-
- ```console
- kubectl get pods -n azuredefender
- ```
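The verification check described above (matching extensionType and installState in the response) can be sketched as a small Python helper. The response shape assumed here, with a top-level "properties" object, is an illustration rather than an authoritative schema:

```python
def is_defender_extension_installed(response: dict) -> bool:
    """Check a parsed GET-extension response for a completed install.

    Assumes (for illustration) the JSON body has a top-level "properties"
    object carrying "extensionType" and "installState".
    """
    props = response.get("properties", {})
    return (
        props.get("extensionType") == "microsoft.azuredefender.kubernetes"
        and props.get("installState") == "Installed"
    )

# Hypothetical example responses.
installed = {"properties": {"extensionType": "microsoft.azuredefender.kubernetes",
                            "installState": "Installed"}}
pending = {"properties": {"extensionType": "microsoft.azuredefender.kubernetes",
                          "installState": "Pending"}}  # common in the first few minutes
```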
---- ::: zone-end -- ::: zone pivot="defender-for-container-eks"-
-## Protect Amazon Elastic Kubernetes Service clusters
-
-> [!IMPORTANT]
-> If you haven't already connected an AWS account, do so now using the instructions in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) and skip to step 3 below.
-
-To protect your EKS clusters, enable the Containers plan on the relevant account connector:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the AWS connector.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
-
-1. Set the toggle for the **Containers** plan to **On**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/enable-containers-plan-on-aws-connector.png" alt-text="Screenshot of enabling Defender for Containers for an AWS connector.":::
-
-1. Optionally, to change the retention period for your audit logs, select **Configure**, enter the required timeframe, and select **Save**.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png" alt-text="Screenshot of adjusting the retention period for EKS control pane logs." lightbox="./media/defender-for-kubernetes-intro/adjust-eks-logs-retention.png":::
-
-1. Continue through the remaining pages of the connector wizard.
-
-1. Azure Arc-enabled Kubernetes and the Defender extension should be installed and running on your EKS clusters. A dedicated Defender for Cloud recommendation deploys the extension (and Arc if necessary):
-
- 1. From Defender for Cloud's **Recommendations** page, search for **EKS clusters should have Azure Defender's extension for Azure Arc installed**.
- 1. Select an unhealthy cluster.
-
- > [!IMPORTANT]
- > You must select the clusters one at a time.
- >
- > Don't select the clusters by their hyperlinked names: select anywhere else in the relevant row.
-
- 1. Select **Fix**.
- 1. Defender for Cloud generates a script in the language of your choice: select Bash (for Linux) or PowerShell (for Windows).
- 1. Select **Download remediation logic**.
- 1. Run the generated script on your cluster.
-
- :::image type="content" source="./media/defender-for-kubernetes-intro/generate-script-defender-extension-kubernetes.gif" alt-text="Video of how to use the Defender for Cloud recommendation to generate a script for your EKS clusters that enables the Azure Arc extension. ":::
-
-### View recommendations and alerts for your EKS clusters
-
-> [!TIP]
-> You can simulate container alerts by following the instructions in [this blog post](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-demonstrate-the-new-containers-features-in-azure-security/ba-p/1011270).
-
-To view the alerts and recommendations for your EKS clusters, use the filters on the alerts, recommendations, and inventory pages to filter by resource type **AWS EKS cluster**.
---

::: zone-end

## Simulate security alerts from Microsoft Defender for Containers

A full list of supported alerts is available in the [reference table of all Defender for Cloud security alerts](alerts-reference.md#alerts-k8scluster).
:::image type="content" source="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png" alt-text="Sample alert from Microsoft Defender for Kubernetes." lightbox="media/defender-for-kubernetes-azure-arc/sample-kubernetes-security-alert.png":::

---
-## Remove the Defender extension
-
-You can remove the extension using Azure portal, Azure CLI, or REST API as explained in the tabs below.
-
-### [**Azure portal - Arc**](#tab/k8s-remove-arc)
-
-### Use Azure portal to remove the extension
-
-1. From the Azure portal, open Azure Arc.
-1. From the infrastructure list, select **Kubernetes clusters** and then select the specific cluster.
-1. Open the extensions page. The extensions on the cluster are listed.
-1. Select the cluster and select **Uninstall**.
-
- :::image type="content" source="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png" alt-text="Removing an extension from your Arc-enabled Kubernetes cluster." lightbox="media/defender-for-kubernetes-azure-arc/extension-uninstall-clusters-page.png":::
-
-### [**Azure CLI**](#tab/k8s-remove-cli)
-
-### Use Azure CLI to remove the Defender extension
-
-1. Remove the Microsoft Defender for Kubernetes Arc extension with the following commands:
-
- ```azurecli
- az login
- az account set --subscription <subscription-id>
- az k8s-extension delete --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes --yes
- ```
-
- Removing the extension may take a few minutes. We recommend you wait before you try to verify that it was successful.
-
-1. To verify that the extension was successfully removed, run the following commands:
-
- ```azurecli
- az k8s-extension show --cluster-type connectedClusters --cluster-name <your-connected-cluster-name> --resource-group <your-rg> --name microsoft.azuredefender.kubernetes
- ```
-
- There should be no delay in the extension resource getting deleted from Azure Resource Manager. After that, validate that there are no pods called "azuredefender-XXXXX" on the cluster by running the following command with the `kubeconfig` file pointed to your cluster:
-
- ```console
- kubectl get pods -n azuredefender
- ```
-
- It might take a few minutes for the pods to be deleted.
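Since the pods can take a few minutes to disappear, one way to wait for them is a small polling loop. This is only a sketch: it assumes `kubectl` is already pointed at the cluster, and the retry count and interval are arbitrary choices, not values from the product documentation.

```shell
# Poll until no azuredefender pods remain (sketch; assumes kubectl is
# configured for the target cluster). Gives up after ~2 minutes.
pods_gone=no
for attempt in $(seq 1 24); do
  # Treat "no matching pods" (or an unreachable cluster) as done.
  if ! kubectl get pods -n azuredefender 2>/dev/null | grep -q '^azuredefender-'; then
    pods_gone=yes
    break
  fi
  sleep 5
done
echo "pods_gone=$pods_gone"
```

If the loop times out with pods still present, re-check the extension deletion step before retrying.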
-
-### [**REST API**](#tab/k8s-remove-api)
-
-### Use REST API to remove the Defender extension
-
-To remove the extension using the REST API, run the following DELETE command:
-
-```rest
-DELETE https://management.azure.com/subscriptions/{{Subscription Id}}/resourcegroups/{{Resource Group}}/providers/Microsoft.Kubernetes/connectedClusters/{{Cluster Name}}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview
-```
-
-| Name | In | Required | Type | Description |
-|--||-|--|-|
-| Subscription ID | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's subscription ID |
-| Resource Group | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's resource group |
-| Cluster Name | Path | True | String | Your Azure Arc-enabled Kubernetes cluster's name |
-||||||
-
-For **Authentication**, your header must have a Bearer token (as with other Azure APIs). To get a bearer token, run the following command:
-
-```azurecli
-az account get-access-token --subscription <your-subscription-id>
-```
-
-The request may take several minutes to complete.
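Putting the token and the DELETE call together, a hedged sketch with `curl` might look like the following. The subscription ID, resource group, and cluster name are placeholders, and the authenticated call is shown commented out because it requires a signed-in Azure CLI.

```shell
# Hypothetical identifiers for illustration -- substitute your own values.
SUB="00000000-0000-0000-0000-000000000000"
RG="myResourceGroup"
CLUSTER="myArcCluster"

# Assemble the DELETE URL shown above.
URL="https://management.azure.com/subscriptions/${SUB}/resourcegroups/${RG}/providers/Microsoft.Kubernetes/connectedClusters/${CLUSTER}/providers/Microsoft.KubernetesConfiguration/extensions/microsoft.azuredefender.kubernetes?api-version=2020-07-01-preview"
echo "$URL"

# With a valid token, the call would be (requires an authenticated Azure CLI):
# TOKEN=$(az account get-access-token --subscription "$SUB" --query accessToken --output tsv)
# curl --request DELETE --header "Authorization: Bearer $TOKEN" "$URL"
```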
---

::: zone-end

::: zone pivot="defender-for-container-aks"
-## Remove the Defender profile
-
-You can remove the profile using the REST API or a Resource Manager template as explained in the tabs below.
-
-### [**REST API**](#tab/aks-removeprofile-api)
-
-### Use REST API to remove the Defender profile from AKS
-
-To remove the profile using the REST API, run the following PUT command:
-
-```rest
-https://management.azure.com/subscriptions/{{SubscriptionId}}/resourcegroups/{{ResourceGroup}}/providers/Microsoft.ContainerService/managedClusters/{{ClusterName}}?api-version={{ApiVersion}}
-```
-
-| Name | Description | Mandatory |
-|-||--|
-| SubscriptionId | Cluster's subscription ID | Yes |
-| ResourceGroup | Cluster's resource group | Yes |
-| ClusterName | Cluster's name | Yes |
-| ApiVersion | API version, must be >= 2021-07-01 | Yes |
-| | | |
-
-Request body:
-
-```rest
-{
- "location": "{{Location}}",
- "properties": {
- "securityProfile": {
- "azureDefender": {
- "enabled": false
- }
- }
- }
-}
-```
-
-Request body parameters:
-
-| Name | Description | Mandatory |
-|--||--|
-| location | Cluster's location | Yes |
-| properties.securityProfile.azureDefender.enabled | Determines whether to enable or disable Microsoft Defender for Containers on the cluster | Yes |
-| | | |
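As a sketch, the PUT URL and body above could be assembled and sent with the Azure CLI's generic `az rest` helper. All identifiers and the location below are placeholders, and the JSON body is validated locally before the (commented-out) call, which requires an authenticated Azure CLI.

```shell
# Hypothetical values for illustration -- substitute your own.
SUB="00000000-0000-0000-0000-000000000000"
RG="myResourceGroup"
CLUSTER="myAksCluster"
API_VERSION="2021-07-01"
LOCATION="eastus"

# The managedClusters resource URL from the table above.
URL="https://management.azure.com/subscriptions/${SUB}/resourcegroups/${RG}/providers/Microsoft.ContainerService/managedClusters/${CLUSTER}?api-version=${API_VERSION}"

# The request body, with the location filled in.
BODY=$(printf '{"location": "%s", "properties": {"securityProfile": {"azureDefender": {"enabled": false}}}}' "$LOCATION")

# Sanity-check that the body is valid JSON before sending.
echo "$BODY" | python3 -m json.tool >/dev/null && echo "body OK"

# To send the request (requires an authenticated Azure CLI):
# az rest --method put --url "$URL" --body "$BODY"
```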
--
-### [**Resource Manager**](#tab/aks-removeprofile-resource-manager)
-
-### Use Azure Resource Manager to remove the Defender profile from AKS
-
-To use Azure Resource Manager to remove the Defender profile, you'll need a Log Analytics workspace on your subscription. Learn more in [Log Analytics workspaces](../azure-monitor/logs/data-platform-logs.md#log-analytics-and-workspaces).
-
-> [!TIP]
-> If you're new to Resource Manager templates, start here: [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md)
-
-The relevant template and parameters to remove the Defender profile from AKS are:
-
-```
-{
- "type": "Microsoft.ContainerService/managedClusters",
- "apiVersion": "2021-07-01",
- "name": "string",
- "location": "string",
- "properties": {
- …
- "securityProfile": {
- "azureDefender": {
- "enabled": false
- }
- },
- }
-}
-```
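For illustration only, the snippet above could be expanded into a minimal complete template like the following. The `$schema`, `contentVersion`, cluster name, and location are assumptions filling the elided parts, not values from the article; the deployment command is shown commented out because it requires a signed-in Azure CLI.

```shell
# Write a minimal, hypothetical template based on the snippet above.
cat > remove-defender-profile.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2021-07-01",
      "name": "myAksCluster",
      "location": "eastus",
      "properties": {
        "securityProfile": {
          "azureDefender": { "enabled": false }
        }
      }
    }
  ]
}
EOF

# Confirm the template file is well-formed JSON.
python3 -m json.tool remove-defender-profile.json >/dev/null && echo "template OK"

# To deploy (requires an authenticated Azure CLI):
# az deployment group create --resource-group <your-rg> --template-file remove-defender-profile.json
```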
---
-## Remove the Defender extension
-
-If you've tried the Defender extension and decided not to use it, or you're troubleshooting a problem and need to uninstall then reinstall, follow the procedure described in [Remove the add-on](../governance/policy/concepts/policy-for-kubernetes.md#remove-the-add-on).
::: zone-end
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-introduction.md
On this page, you'll learn how you can use Defender for Containers to improv
| Aspect | Details |
|--|:--|
| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
-| Pricing: | **Microsoft Defender for Containers** is free for the month of December 2021. After that, it will be billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/) (which will be updated at the end of December 2021) |
-| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)<br> • Nodes with taints applied |
-| Kubernetes distributions: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
+| Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
+| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
| | |
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-databases-introduction.md
Defender for Cloud detects anomalous activities indicating unusual and potential
| Aspect | Details |
|--|:--|
| Release state: | General availability (GA) |
-| Pricing: | **Microsoft Defender for open-source relational databases** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/) |
+| Pricing: | **Microsoft Defender for open-source relational databases** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
| Protected versions of PostgreSQL: | Single Server - General Purpose and Memory Optimized. Learn more in [PostgreSQL pricing tiers](../postgresql/concepts-pricing-tiers.md). |
| Protected versions of MySQL: | Single Server - General Purpose and Memory Optimized. Learn more in [MySQL pricing tiers](../mysql/concepts-pricing-tiers.md). |
| Protected versions of MariaDB: | General Purpose and Memory Optimized. Learn more in [MariaDB pricing tiers](../mariadb/concepts-pricing-tiers.md). |
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-dns-introduction.md
From within Azure DNS, Defender for DNS monitors the queries from these resource
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government|
|||
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-key-vault-introduction.md
Enable **Microsoft Defender for Key Vault** for Azure-native, advanced threat pr
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Key Vault** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Key Vault** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
|||
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)<br>Protections for EKS clusters are preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).<br>**Containers plan** for EKS clusters in connected AWS accounts is free while it's in preview.|
+|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).<br>**Containers plan** for EKS clusters in connected AWS accounts is free while it's in preview.|
|Required roles and permissions:|**Security admin** can dismiss alerts.<br>**Security reader** can view findings.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview)|
|||
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
Microsoft Defender for Resource Manager automatically monitors the resource mana
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Resource Manager** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Resource Manager** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-sql-introduction.md
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
|Aspect|Details|
|-|:-|
|Release state:|**Microsoft Defender for Azure SQL database servers** - Generally available (GA)<br>**Microsoft Defender for SQL servers on machines** - Generally available (GA)|
-|Pricing:|The two plans that form **Microsoft Defender for SQL** are billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|The two plans that form **Microsoft Defender for SQL** are billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|[SQL on Azure virtual machines](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)<br>[SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>On-premises SQL servers on Windows machines without Azure Arc<br>Azure SQL [single databases](../azure-sql/database/single-database-overview.md) and [elastic pools](../azure-sql/database/elastic-pool-overview.md)<br>[Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md)<br>[Azure Synapse Analytics (formerly SQL DW) dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
|||
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-sql-usage.md
You'll see alerts when there are suspicious database activities, potential vulne
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|SQL Server (versions currently [supported by Microsoft](/mem/configmgr/core/plan-design/configs/support-for-sql-server-versions))|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-storage-introduction.md
Security alerts are triggered when anomalous activities occur. These alerts appe
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/)|
+|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/)<br>[Azure Files](../storage/files/storage-files-introduction.md)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
|||
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enable-data-collection.md
Title: Auto-deploy agents for Microsoft Defender for Cloud | Microsoft Docs
description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud
Previously updated : 12/26/2021
Last updated : 01/10/2022

# Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud
Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to ensure your resources have the necessary agents and extensions used by Defender for Cloud.
+
+> [!NOTE]
+> When you enable auto provisioning of any of the supported extensions, you'll potentially impact *existing* and *future* machines. But when you **disable** auto provisioning for an extension, you'll only affect the *future* machines: nothing is uninstalled by disabling auto provisioning.
+
## Prerequisites

To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).

## Availability
-| Aspect | Details |
-||:--|
-| Release state: | **Feature**: Auto provisioning is generally available (GA)<br>**Agent and extensions**: Log Analytics agent for Azure VMs is GA, Microsoft Dependency agent is in preview, Policy Add-on for Kubernetes is GA, Guest Configuration agent is preview |
-| Pricing: | Free |
-| Required roles and permissions: | [Owner](../role-based-access-control/built-in-roles.md#owner) to enable auto provisioning of the Log Analytics agent|
-| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines<br>:::image type="icon" source="./media/icons/no-icon.png"::: Kubernetes nodes<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual Machine Scale Sets |
-| Clouds: | **Feature**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet<br>**Agent and extensions**:<br>Log Analytics agent for Azure VMs is available on all clouds, Policy Add-on for Kubernetes is available on all clouds, Guest Configuration agent is only available on commercial clouds |
-| | |
+### [**Auto provisioning**](#tab/autoprovision-feature)
+
+This table shows the availability details for the auto provisioning **feature** itself.
+
+| Aspect | Details |
+||:|
+| Release state: | Auto provisioning is generally available (GA) |
+| Pricing: | Auto provisioning is free to use |
+| Required roles and permissions: | Depends on the specific extension - see relevant tab |
+| Supported destinations: | Depends on the specific extension - see relevant tab |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
++
+### [**Log Analytics agent**](#tab/autoprovision-loganalytic)
+
+| Aspect | Azure virtual machines | Azure Arc-enabled machines |
+||:|:--|
+| Release state: | Generally available (GA) | Preview |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md)<br>[Microsoft Defender for SQL](defender-for-sql-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Vulnerability assessment**](#tab/autoprovision-va)
+
+| Aspect | Details |
+||:--|
+| Release state: | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Defender for Endpoint**](#tab/autoprovision-defendpoint)
+
+| Aspect | Linux | Windows |
+||:--|:-|
+| Release state: | Generally available (GA) | Generally available (GA) |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Security Admin](../role-based-access-control/built-in-roles.md#security-admin) |
+| Supported destinations: | :::image type="icon" source="./medi), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops (EVD)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 10 (other than EVD or WVD) |
+| Policy-based: | :::image type="icon" source="./media/icons/no-icon.png"::: No | :::image type="icon" source="./media/icons/no-icon.png"::: No |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet |
++
+### [**Guest Configuration**](#tab/autoprovision-guestconfig)
+
+| Aspect | Details |
+||:--|
+| Release state: | Preview |
+| Relevant Defender plan: | [Microsoft Defender for servers](defender-for-servers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet |
+
+### [**Defender for Containers**](#tab/autoprovision-containers)
+
+This table shows the availability details for the various components that can be auto provisioned to provide the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+
+| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters |
+||-||
+| Release state: | • Defender profile is in preview<br> • Azure Policy add-on is generally available (GA) | • Defender extension is in preview<br> • Azure Policy extension for Azure Arc is in preview |
+| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+| Supported destinations: | Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers | Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters |
+| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
+| Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
+++
+> [!TIP]
+> For items marked in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
## How does Defender for Cloud collect data?
Selecting a data collection tier in Microsoft Defender for Cloud only affects th
### Requirements

The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-Storing data in Log Analytics might incur additional charges for data storage. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Storing data in Log Analytics might incur additional charges for data storage. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users Users of Microsoft Sentinel: note that security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you're planning to add Microsoft Sentinel to a workspace that is already getting alerts from Microsoft Defender for Cloud, and is set to collect Security Events, you have two options:
To turn off automatic provisioning of an agent:
## Next steps
-This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in Log Analytics, whether you use a new or existing workspace, might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in Log Analytics, whether you use a new or existing workspace, might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enable-enhanced-security.md
For the purpose of the Defender for Cloud quickstarts and tutorials you must ena
You can protect an entire Azure subscription with Defender for Cloud's enhanced security features and the protections will be inherited by all resources within the subscription.
-A free 30-day trial is available. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+A free 30-day trial is available. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Enable enhanced security features from the Azure portal
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/enhanced-security-features-overview.md
Last updated 11/14/2021
The enhanced security features are free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage.
-You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
:::image type="content" source="media/enhanced-security-features-overview/defender-plans-top.png" alt-text="Enabling Microsoft Defender for Cloud's enhanced security features.":::
If the workspace is in the legacy Per Node pricing tier, the Defender for Cloud
This article explained Defender for Cloud's pricing options. For related material, see: - [How to optimize your Azure workload costs](https://azure.microsoft.com/blog/how-to-optimize-your-azure-workload-costs/)-- [Pricing details according to currency or region](https://azure.microsoft.com/pricing/details/security-center/)
+- [Pricing details according to currency or region](https://azure.microsoft.com/pricing/details/defender-for-cloud/)
- You may want to manage your costs and limit the amount of data collected for a solution by limiting it to a particular set of agents. Use [solution targeting](../azure-monitor/insights/solution-targeting.md) to apply a scope to the solution and target a subset of computers in the workspace. If you're using solution targeting, Defender for Cloud lists the workspace as not having a solution.
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/get-started.md
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+Defender for Cloud provides unified security management and threat protection across your hybrid and multi-cloud workloads. While the free features offer limited security for your Azure resources only, enabling enhanced security features extends these capabilities to on-premises and other clouds. Defender for Cloud helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
This quickstart section will walk you through all the recommended steps to enable Microsoft Defender for Cloud and the enhanced security features. When you've completed all the quickstart steps, you'll have:
defender-for-cloud Investigate Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/investigate-resource-health.md
Last updated 11/09/2021
> The resource health page described in this tutorial is a preview release. > [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
-|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md), you can see outstanding security alerts for that specific resource too.
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/partner-integration.md
Defender for Cloud makes it easy to enable integrated security solutions in Azur
- **Integrated detections**: Security events from partner solutions are automatically collected, aggregated, and displayed as part of Defender for Cloud alerts and incidents. These events also are fused with detections from other sources to provide advanced threat-detection capabilities. - **Unified health monitoring and management**: Customers can use integrated health events to monitor all partner solutions at a glance. Basic management is available, with easy access to advanced setup by using the partner solution.
-Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/) and [Microsoft Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md).
+Currently, integrated security solutions include vulnerability assessment by [Qualys](https://www.qualys.com/public-cloud/#azure) and [Rapid7](https://www.rapid7.com/products/insightvm/).
> [!NOTE] > Defender for Cloud does not install the Log Analytics agent on partner virtual appliances because most security vendors prohibit external agents running on their appliances.
To learn more about the integration of vulnerability scanning tools from Qualys,
Defender for Cloud also offers vulnerability analysis for your:
-* SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
* Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md) ## How security solutions are integrated
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-readiness-roadmap.md
Articles
* [Defender for Cloud Documentation Page](./index.yml) * [Defender for Cloud REST API Documentation Page](/previous-versions/azure/reference/mt704034(v=azure.100)) * [Defender for Cloud frequently asked questions (FAQ)](./faq-general.yml)
-* [Pricing page](https://azure.microsoft.com/pricing/details/security-center/)
+* [Pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)
* [Identity security best practices](../security/fundamentals/identity-management-best-practices.md) * [Network security best practices](../security/fundamentals/network-best-practices.md) * [PaaS recommendations](../security/fundamentals/paas-deployments.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds.md
The two **tabs** below show the features of Microsoft Defender for Cloud that ar
> [!TIP]
->To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+>To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Supported endpoint protection solutions <a name="endpoint-supported"></a>
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/tutorial-security-incident.md
In this tutorial, you will learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites
-To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. You can try these at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/). The quickstart [Get started with Defender for Cloud](get-started.md) walks you through how to upgrade.
+To step through the features covered in this tutorial, you must have Defender for Cloud's enhanced security features enabled. You can try these at no cost. To learn more, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). The quickstart [Get started with Defender for Cloud](get-started.md) walks you through how to upgrade.
## Triage security alerts
defender-for-cloud Windows Admin Center Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/windows-admin-center-integration.md
Windows Admin Center is a management tool for your Windows servers. It's a singl
> [!NOTE] > Your Azure subscription and the associated Log Analytics workspace both need to have Microsoft Defender for Cloud's enhanced security features enabled in order to enable the Windows Admin Center integration.
-> Enhanced security features are free for the first 30 days if you haven't previously used it on the subscription and workspace. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/security-center/).
+> Enhanced security features are free for the first 30 days if you haven't previously used it on the subscription and workspace. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
> When you've successfully onboarded a server from Windows Admin Center to Microsoft Defender for Cloud, you can:
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md
Activation and setup of the on-premises management console ensures that:
## Sign in for the first time
-**To sign in to the management console:**
+**To sign in to the on-premises management console:**
1. Navigate to the IP address you received for the on-premises management console during the system installation.
-
-1. Enter the username and password you received for the on-premises management console during the system installation.
+1. Enter the username and password you received for the on-premises management console during the system installation.
If you forgot your password, select the **Recover Password** option, and see [Password recovery](how-to-manage-the-on-premises-management-console.md#password-recovery) for instructions on how to recover your password.
After connecting, you must set up a site with these sensors.
Enable a secured tunneling connection between organizational sensors and the on-premises management console. This setup circumvents interaction with the organizational firewall, and as a result reduces the attack surface.
-Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (that is, 9000) to any sensor.
+Using tunneling allows you to connect to the on-premises management console from its IP address and a single port (9000 by default) to any sensor.
+ **To set up tunneling at the on-premises management console:**
Using tunneling allows you to connect to the on-premises management console from
```bash cyberx-management-tunnel-enable
- service apache2 reload
- sudo cyberx-management-tunnel-add-xsense --xsenseuid <sensorIPAddress> --xsenseport 9000
- service apache2 reload
+
+ ```
+
+Allow a few minutes for the connection to initiate.
+
+You can also customize the port to a number other than 9000, for example, 10000.
+
+**To use a new port:**
+
+- Sign in to the on-premises management console and run the following command:
+
+ ```bash
+ sudo cyberx-management-tunnel-enable --port 10000
+
```
-**To set up tunneling on the sensor:**
+Disable the connection when required.
+
+**To disable:**
+
+- Sign in to the on-premises management console and run the following command:
+
+ ```bash
+ cyberx-management-tunnel-disable
+
+ ```
+
+No configuration is needed on the sensor.
+
+**Log files**
-1. Open TCP port 9000 on the sensor (network.properties) manually. If the port is not open, the sensor will reject the connection from the on-premises management console.
+Review the log files for information about the tunneling connection.
-2. Sign in to each sensor and run the following commands:
+**To access log files:**
- ```bash
- sudo cyberx-xsense-management-connect -ip <on-premises management console IP Address> -token < Copy the string that appears after the IP colon (:) from the Connection String field, Management Console Connection dialog box>
- sudo cyberx-xsense-management-tunnel
- sudo vi /var/cyberx/properties/network.properties
- opened_tcp_incoming_ports=22,80,443,9000
- sudo cyberx-xsense-network-validation
- sudo /etc/network/if-up.d/iptables-recover
- sudo iptables -nvL
- ```
+1. Sign in to the on-premises management console and open: `/var/log/apache2.log`
+1. Sign in to the sensor and open: `/var/cyberx/logs/tunnel.log`
## Set up a site
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Open these ports to allow extra services for Defender for IoT.
| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server | | Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server. | On-premises management console and Sensor | Syslog server | | LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to log in to the system. | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Sensor | On-premises management console |
+| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
### Plan rack installation
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
* Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration. > [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows DMS service to upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
> [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.-
- > - Azure Storage Account with Private endpoint is not supported by Azure Database Migration service.
-
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
> - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration. > - You should take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server). > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
To complete this tutorial, you need to:
> Regarding the storage account used as part of the migration, you must either: > * Choose to allow all network to access the storage account. > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on MI subnet and update the Storage Account firewall rules to allow this subnet.
-
- >- Azure Storage account with the private endpoint is not supported by Azure Database Migration Service.
+ > * You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md). * Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
To complete this tutorial, you need to:
- Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container), be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI. This detail provides Azure Database Migration Service with access to your storage account container for uploading the backup files used for migrating databases to SQL Managed Instance. > [!NOTE]
- > Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
-
-
- >- Azure Storage Account with Private endpoint is not supported by Azure Database Migration service.
+ > - Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
+ > - You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/about-fastpath.md
To configure FastPath, the virtual network gateway must be either:
While FastPath supports most configurations, it doesn't support the following features:
-* UDR on the gateway subnet: This UDR has no impact on the network traffic that FastPath sends directly from your on-premises network to the virtual machines in Azure virtual network.
+* UDR on the gateway subnet: FastPath doesn't honor UDRs configured on the gateway subnet. FastPath traffic bypasses any next-hops determined by UDRs configured on the gateway subnet.
* Basic Load Balancer: If you deploy a Basic internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network uses a Basic internal load balancer, the network traffic from your on-premises network to the virtual IPs hosted on the Basic load balancer will be sent to the virtual network gateway. The solution is to upgrade the Basic load balancer to a [Standard load balancer](../load-balancer/load-balancer-overview.md).
This preview is available in the following Azure Regions.
- West US 2 - West US 3
+This preview supports connectivity to the following Azure services:
+- Azure Cosmos DB
+- Azure Key Vault
+- Azure Storage
+- Third Party Private Link Services
+ > [!NOTE] > Private Link pricing will not apply to traffic sent over ExpressRoute FastPath during Public preview. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). >
expressroute Expressroute Troubleshooting Expressroute Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md
Title: 'Azure ExpressRoute: Verify Connectivity - Troubleshooting Guide'
-description: This page provides instructions on troubleshooting and validating end to end connectivity of an ExpressRoute circuit.
+description: This page provides instructions on troubleshooting and validating end-to-end connectivity of an ExpressRoute circuit.
Previously updated : 10/31/2019 Last updated : 01/07/2022
This article helps you verify and troubleshoot ExpressRoute connectivity. Expres
>
-The purpose of this document is to help user to identify if and where a connectivity issue exists. Thereby, to help seek support from the appropriate team to resolve an issue. If Microsoft support is needed to resolve an issue, open a support ticket with [Microsoft Support][Support].
+The purpose of this document is to help you identify if and where a connectivity issue exists, and to help you engage the appropriate team to resolve it. If Microsoft support is needed, open a support ticket with [Microsoft Support][Support].
> [!IMPORTANT] > This document is intended to help diagnosing and fixing simple issues. It is not intended to be a replacement for Microsoft support. Open a support ticket with [Microsoft Support][Support] if you are unable to solve the problem using the guidance provided.
The following are the logical steps in troubleshooting an ExpressRoute circuit:
* [Confirm the traffic flow](#confirm-the-traffic-flow)
+* [Test private peering connectivity](#test-private-peering-connectivity)
+ ## Verify circuit provisioning and state Provisioning an ExpressRoute circuit establishes redundant Layer 2 connections between CEs/PE-MSEEs (2)/(4) and MSEEs (5). For more information on how to create, modify, provision, and verify an ExpressRoute circuit, see the article [Create and modify an ExpressRoute circuit][CreateCircuit].
Provisioning an ExpressRoute circuit establishes a redundant Layer 2 connections
> ### Verification via the Azure portal
-In the Azure portal, open the ExpressRoute circuit blade. In the ![3][3] section of the blade, the ExpressRoute essentials are listed as shown in the following screenshot:
+In the Azure portal, open the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute essentials are listed as shown in the following screenshot:
![4][4]
After the service provider has completed the provisioning the ExpressRoute circu
> In IPVPN connectivity model, service providers handle the responsibility of configuring the peerings (layer 3 services). In such a model, after the service provider has configured a peering and if the peering is blank in the portal, try refreshing the circuit configuration using the refresh button on the portal. This operation will pull the current routing configuration from your circuit. >
-In the Azure portal, status of an ExpressRoute circuit peering can be checked under the ExpressRoute circuit blade. In the ![3][3] section of the blade, the ExpressRoute peerings would be listed as shown in the following screenshot:
+In the Azure portal, status of an ExpressRoute circuit peering can be checked under the ExpressRoute circuit page. In the ![3][3] section of the page, the ExpressRoute peerings would be listed as shown in the following screenshot:
![5][5]
-In the preceding example, as noted Azure private peering is provisioned, whereas Azure public and Microsoft peerings are not provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
+In the preceding example, as noted Azure private peering is provisioned, but Azure public and Microsoft peerings aren't provisioned. A successfully provisioned peering context would also have the primary and secondary point-to-point subnets listed. The /30 subnets are used for the interface IP address of the MSEEs and CEs/PE-MSEEs. For the peerings that are provisioned, the listing also indicates who last modified the configuration.
> [!NOTE] > If enabling a peering fails, check if the primary and secondary subnets assigned match the configuration on the linked CE/PE-MSEE. Also check if the correct *VlanId*, *AzureASN*, and *PeerASN* are used on MSEEs and if these values maps to the ones used on the linked CE/PE-MSEE. If MD5 hashing is chosen, the shared key should be same on MSEE and PE-MSEE/CE pair. Previously configured shared key would not be displayed for security reasons. Should you need to change any of these configuration on an MSEE router, refer to [Create and modify routing for an ExpressRoute circuit][CreatePeering].
$ckt = Get-AzExpressRouteCircuit -ResourceGroupName "Test-ER-RG" -Name "Test-ER-
Get-AzExpressRouteCircuitPeeringConfig -Name "MicrosoftPeering" -ExpressRouteCircuit $ckt ```
-If a peering is not configured, there would be an error message. A sample response, when the stated peering (Azure Public peering in this example) is not configured within the circuit:
+If a peering isn't configured, there would be an error message. A sample response, when the stated peering (Azure Public peering in this example) isn't configured within the circuit:
```azurepowershell Get-AzExpressRouteCircuitPeeringConfig : Sequence contains no matching element
Path : 123##
>
-The following example shows the response of the command for a peering that does not exist:
+The following example shows the response of the command for a peering that doesn't exist:
```azurepowershell Get-AzExpressRouteCircuitRouteTable : The BGP Peering AzurePublicPeering with Service Key ********************* is not found.
Get-AzExpressRouteCircuitRouteTable : The BGP Peering AzurePublicPeering with Se
StatusCode: 400 ```
+## Test private peering connectivity
+
+Test your private peering connectivity by **counting** packets arriving at and leaving the Microsoft edge of your ExpressRoute circuit, on the Microsoft Enterprise Edge (MSEE) devices. This diagnostic tool works by applying an Access Control List (ACL) to the MSEE to count the number of packets that hit specific ACL rules. Using this tool, you can confirm connectivity by answering questions such as:
+
+* Are my packets getting to Azure?
+* Are they getting back to on-prem?
+
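Conceptually, the diagnostic applies an ACL rule on the MSEE and counts packets in each direction that match it. A toy model of that counting, with invented packet tuples (not the service's actual implementation):

```python
from collections import Counter

def count_matches(packets, onprem_ip, azure_ip):
    """Count packets per direction that match a src/dst ACL rule.

    'received' models traffic inbound to Azure; 'sent' models traffic
    returning to on-premises, mirroring the diagnostic's two counters.
    """
    counts = Counter()
    for pkt in packets:
        if pkt["src"] == onprem_ip and pkt["dst"] == azure_ip:
            counts["received"] += 1  # inbound to Azure
        elif pkt["src"] == azure_ip and pkt["dst"] == onprem_ip:
            counts["sent"] += 1      # outbound back to on-premises
    return counts

packets = [
    {"src": "10.0.0.1", "dst": "20.0.0.1"},
    {"src": "20.0.0.1", "dst": "10.0.0.1"},
    {"src": "10.0.0.1", "dst": "20.0.0.1"},
]
counts = count_matches(packets, "10.0.0.1", "20.0.0.1")
print(counts["received"], counts["sent"])  # 2 1
```

A mismatch between the two counters is exactly what the interpretation scenarios later in this section diagnose.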
+### Run test
+1. To access this diagnostic tool, select **Diagnose and solve problems** from your ExpressRoute circuit in the Azure portal.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/diagnose-problems.png" alt-text="Screenshot of diagnose and solve problem page from ExpressRoute circuit.":::
+
+1. Select the **Connectivity issues** card under **Common problems**.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/connectivity-issues.png" alt-text="Screenshot of connectivity issues option.":::
+
+1. In the dropdown for *Tell us more about the problem you are experiencing*, select **Connectivity to Azure Private, Azure Public, or Dynamics 365 services.**
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/tell-us-more.png" alt-text="Screenshot of drop-down option for problem user is experiencing.":::
+
+1. Scroll down to the **Test your private peering connectivity** section and expand it.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/test-private-peering.png" alt-text="Screenshot of troubleshooting connectivity issues options.":::
+
+1. Execute the [PsPing](https://docs.microsoft.com/sysinternals/downloads/psping) test from your on-premises IP address to your Azure IP address and keep it running during the connectivity test.
+
+1. Fill out the fields of the form, making sure to enter the same on-premises and Azure IP addresses that you used in step 5. Then select **Submit** and wait for your results to load. Once the results are ready, use the following guidance to interpret them.
+
+ :::image type="content" source="./media/expressroute-troubleshooting-expressroute-overview/form.png" alt-text="Screenshot of debug ACL form.":::
+
+### Interpreting results
+Your test results for each MSEE device will look like the example below. You'll have two sets of results for the primary and secondary MSEE devices. Review the number of matches in and out and use the following scenarios to interpret the results:
+* **You see packet matches sent and received on both MSEEs:** This indicates healthy traffic inbound to and outbound from the MSEE on your circuit. If loss is occurring either on-premises or in Azure, it's happening downstream from the MSEE.
+* **If testing PsPing from on-premises to Azure *(received)* results show matches, but *sent* results show NO matches:** This indicates that traffic is getting inbound to Azure, but isn't returning to on-prem. Check for return-path routing issues (for example, are you advertising the appropriate prefixes to Azure? Is there a UDR overriding prefixes?).
+* **If testing PsPing from Azure to on-premises *(sent)* results show NO matches, but *(received)* results show matches:** This indicates that traffic is getting to on-premises, but isn't getting back. You should work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit.
+* **One MSEE shows NO matches, while the other shows good matches:** This indicates that one MSEE isn't receiving or passing any traffic. It could be offline (for example, BGP/ARP down).
+
+#### Example
+```
+src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches
+src 20.0.0.0 srcport 3389 dst 10.0.0.0 (sent): 120 matches
+```
+This test result has the following properties:
+
+* IP Port: 3389
+* On-prem IP Address CIDR: 10.0.0.0
+* Azure IP Address CIDR: 20.0.0.0
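The result lines have a regular shape, so they're easy to parse if you want to compare the primary and secondary MSEE counts programmatically. A minimal sketch, with the line format taken from the example above:

```python
import re

# Matches lines such as:
#   src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches
#   src 20.0.0.0 srcport 3389 dst 10.0.0.0 (sent): 120 matches
RESULT_LINE = re.compile(
    r"src (?P<src>\S+) (?:srcport (?P<srcport>\d+) )?"
    r"dst (?P<dst>\S+)(?: dstport (?P<dstport>\d+))? "
    r"\((?P<direction>received|sent)\): (?P<matches>\d+) matches"
)

def parse_result(line: str) -> dict:
    """Parse one debug-ACL result line into its fields."""
    m = RESULT_LINE.match(line.strip())
    if m is None:
        raise ValueError(f"Unrecognized result line: {line!r}")
    fields = m.groupdict()
    fields["matches"] = int(fields["matches"])
    return fields

received = parse_result("src 10.0.0.0 dst 20.0.0.0 dstport 3389 (received): 120 matches")
sent = parse_result("src 20.0.0.0 srcport 3389 dst 10.0.0.0 (sent): 120 matches")
# Matching sent/received counts indicate healthy traffic through this MSEE.
print(received["matches"] == sent["matches"])  # True
```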
+ ## Next Steps For more information or help, check out the following links:
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-performance.md
+
+ Title: Azure Firewall performance
+description: Compare Azure Firewall performance for Azure Firewall Standard and Premium
++++ Last updated : 01/11/2022+++
+# Azure Firewall performance
+
+Reliable firewall performance is essential to operate and protect your virtual networks in Azure. More advanced features (like those found in Azure Firewall Premium) require more processing complexity, which affects firewall performance and can impact overall network performance.
+
+Azure Firewall has two versions: Standard and Premium.
+
+- Azure Firewall Standard
+
+ Azure Firewall Standard has been generally available since September 2018. It's a cloud-native, highly available firewall-as-a-service with built-in auto scaling. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application-level and network-level filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
+- Azure Firewall Premium
+
+ Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the Firewall are TLS inspection and IDPS (Intrusion Detection and Prevention).
+
+For more information about Azure Firewall, see [What is Azure Firewall?](overview.md)
+
+## Performance testing
+
+Before deploying Azure Firewall, test and evaluate its performance to ensure it meets your expectations. Azure Firewall should not only handle the current traffic on a network, but also be ready for potential traffic growth. We recommend evaluating on a test network, not in a production environment. The testing should replicate the production environment as closely as possible, including the network topology and the actual characteristics of the expected traffic through the firewall.
+
+## Performance data
+
+The following set of performance results demonstrates the maximum Azure Firewall throughput in various use cases. All use cases were measured while Threat intelligence mode was set to alert/deny.
++
+|Firewall type and use case |TCP/UDP bandwidth (Gbps) |HTTP/S bandwidth (Gbps) |
+||||
+|Standard |30|30|
+|Premium (no TLS/IDPS) |30|30|
+|Premium with TLS |-|30|
+|Premium with IDS |30|30|
+|Premium with IPS |10|10|
+
+> [!NOTE]
+> IPS (Intrusion Prevention System) takes place when one or more signatures are configured to *Alert and Deny* mode.
+
+Azure Firewall Premium's new performance boost functionality is now in public preview and enhances overall firewall performance, as shown in the following table:
++
+|Firewall use case |Without performance boost (Gbps) |With performance boost (Gbps) |
+||||
+|Standard<br>Max bandwidth for single TCP connection |1.3|-|
+|Premium<br>Max bandwidth for single TCP connection |2.6|9.5|
+|Premium max bandwidth with TLS/IDS|30|100|
+
+Performance values are calculated with Azure Firewall at full scale and with Premium performance boost enabled. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
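For capacity planning, the published figures can be captured in a small lookup. This sketch only restates the values from the tables above; actual performance varies with rule complexity and network configuration:

```python
# Maximum measured throughput in Gbps, copied from the published tables above.
# "premium_tls" applies to HTTP/S traffic only (TCP/UDP is not published for TLS).
MAX_THROUGHPUT_GBPS = {
    "standard": 30,
    "premium": 30,       # no TLS inspection, no IDPS
    "premium_tls": 30,
    "premium_ids": 30,
    "premium_ips": 10,
}

def meets_requirement(use_case: str, required_gbps: float) -> bool:
    """Check whether a use case's published maximum covers the required bandwidth."""
    return MAX_THROUGHPUT_GBPS[use_case] >= required_gbps

print(meets_requirement("premium_ips", 15))  # False: IPS tops out at 10 Gbps
print(meets_requirement("standard", 15))     # True
```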
+
+## How to configure Premium performance boost (preview)
+
+As more applications move to the cloud, network element performance can become a bottleneck. The performance boost (preview) for Azure Firewall Premium allows more scalability for those deployments.
+
+To enable the Azure Firewall Premium performance boost, run the following Azure PowerShell commands. This feature is applied at the **subscription** level for all firewalls (VNet firewalls and SecureHub firewalls). Currently, the performance boost isn't recommended for SecureHub firewalls. Check back here for the latest updates as we work to change this recommendation. Also, this setting doesn't have any effect on Standard firewalls.
+
+After you run the Azure PowerShell commands, run an update operation on the Azure Firewall for the feature to take effect immediately. This update operation can be a rule change (least intrusive), a setting configuration, or a stop/start operation. Otherwise, the firewalls update with the feature within several days.
+
+Run the following Azure PowerShell to configure the Azure Firewall Premium performance boost:
+
+```azurepowershell
+Connect-AzAccount
+
+Select-AzSubscription -Subscription "subscription_id or subscription_name"
+
+Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
+
+Run the following Azure PowerShell to turn it off:
+
+```azurepowershell
+Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
+```
+++
+## Next steps
+
+- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/assignment-structure.md
You use JSON to create a policy assignment. The policy assignment contains eleme
- policy definition - non-compliance messages - parameters
+- identity
For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic parameters:
In this example, the parameters previously defined in the policy definition are
same policy definition is reusable with a different set of parameters for a different department, reducing the duplication and complexity of policy definitions while providing flexibility.
+## Identity
+For policy assignments with effect set to **deployIfNotExists** or **modify**, an identity property is required to do remediation on non-compliant resources. When an assignment uses an identity, the user must also specify a location for the assignment.
+
+A system-assigned identity:
+
+```json
+{
+  "identity": {
+    "type": "SystemAssigned"
+  }
+}
+```
+
+A user-assigned identity:
+
+```json
+{
+  "identity": {
+    "type": "UserAssigned",
+    "userAssignedIdentities": {
+      "/subscriptions/SubscriptionID/resourceGroups/testResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/test-identity": {}
+    }
+  }
+}
+```
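In a full assignment request body, `identity` and `location` sit at the top level alongside `properties`. A minimal sketch of such a body built in Python; the definition ID and display name are placeholders, not real values:

```python
import json

# Placeholder for illustration only
policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/<definition-id>"

assignment_body = {
    # Required whenever an identity is specified on the assignment
    "location": "westus",
    "identity": {"type": "SystemAssigned"},
    "properties": {
        "displayName": "Example assignment with a managed identity",
        "policyDefinitionId": policy_definition_id,
    },
}

print(json.dumps(assignment_body, indent=2))
```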
+++ ## Next steps - Learn about the [policy definition structure](./definition-structure.md).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
location of the Constraint template to use in Kubernetes to limit the allowed co
## DeployIfNotExists Similar to AuditIfNotExists, a DeployIfNotExists policy definition executes a template deployment
-when the condition is met.
+when the condition is met. Policy assignments with effect set as DeployIfNotExists require a [managed identity](../how-to/remediate-resources.md) to do remediation.
> [!NOTE] > [Nested templates](../../../azure-resource-manager/templates/linked-templates.md#nested-template)
Modify is used to add, update, or remove properties or tags on a subscription or
creation or update. A common example is updating tags on resources such as costCenter. Existing non-compliant resources can be remediated with a [remediation task](../how-to/remediate-resources.md). A single Modify rule can have any number of
-operations.
+operations. Policy assignments with effect set as Modify require a [managed identity](../how-to/remediate-resources.md) to do remediation.
The following operations are supported by Modify:
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/remediate-resources.md
Title: Remediate non-compliant resources description: This guide walks you through the remediation of resources that are non-compliant to policies in Azure Policy. Previously updated : 08/17/2021 Last updated : 12/1/2021 # Remediate non-compliant resources with Azure Policy
understand and accomplish remediation with Azure Policy.
## How remediation security works
-When Azure Policy runs the template in the **deployIfNotExists** policy definition, it does so using
-a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md).
-Azure Policy creates a managed identity for each assignment, but must have details about what roles
+When Azure Policy starts a template deployment while evaluating **deployIfNotExists** policies, or modifies a resource while evaluating **modify** policies, it does so using
+a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment.
+Policy assignments can use either a system-assigned managed identity that's created by the policy service or a user-assigned identity provided by the user. The managed identity must be granted the roles required for remediating resources.
If the managed identity is missing roles, an error is displayed during the assignment of the policy or an initiative. When using the portal, Azure Policy automatically grants the managed identity the listed roles once assignment starts. When using the SDK, the roles must be granted to the managed identity manually. The _location_ of the managed identity doesn't impact its operation with Azure Policy. > [!IMPORTANT] > In the following scenarios, the assignment's managed identity must be
doesn't impact its operation with Azure Policy.
The first step is to define the roles that **deployIfNotExists** and **modify** needs in the policy definition to successfully deploy the content of your included template. Under the **details**
-property, add a **roleDefinitionIds** property. This property is an array of strings that match
+property in the policy definition, add a **roleDefinitionIds** property. This property is an array of strings that match
roles in your environment. For a full example, see the [deployIfNotExists example](../concepts/effects.md#deployifnotexists-example) or the [modify examples](../concepts/effects.md#modify-examples).
az role definition list --name 'Contributor'
## Manually configure the managed identity
-When creating an assignment using the portal, Azure Policy both generates the managed identity and
-grants it the roles defined in **roleDefinitionIds**. In the following conditions, steps to create
+When creating an assignment using the portal, Azure Policy can both generate a managed identity and
+grant it the roles defined in **roleDefinitionIds**. In the following conditions, steps to create
the managed identity and assign it permissions must be done manually: - While using the SDK (such as Azure PowerShell) - When a resource outside the assignment scope is modified by the template - When a resource outside the assignment scope is read by the template
+## Configure a managed identity through the Azure portal
+
+When creating an assignment using the portal, you can select either a system-assigned managed identity or a user-assigned managed identity.
+
+To set a system assigned managed identity in the portal:
+
+1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **System assigned managed identity**
+is selected.
+
+1. Specify the location at which the managed identity is to be located.
+
+To set a user assigned managed identity in the portal:
+
+1. On the **Remediation** tab of the create/edit assignment view, under **Types of Managed Identity**, ensure that **User assigned managed identity**
+is selected.
+
+1. Specify the scope where the managed identity is hosted. The scope of the managed identity doesn't have to match the scope of the assignment, but it must be in the same tenant.
+
+1. Under **Existing user assigned identities**, select the managed identity.
+
+ > [!NOTE]
+ > If the managed identity doesn't have the permissions needed to execute the required remediation task, it's granted permissions *automatically* only through the portal. For all other methods, you must configure the permissions manually.
+ >
+ ### Create managed identity with PowerShell
-To create a managed identity during the assignment of the policy, **Location** must be defined and
-**AssignIdentity** used. The following example gets the definition of the built-in policy **Deploy
+To create an identity during the assignment of the policy, **Location** must be defined and **Identity** used. The following example gets the definition of the built-in policy **Deploy SQL DB transparent data encryption**, sets the target resource group, and then creates the assignment using a **system assigned** managed identity.
+
+```azurepowershell-interactive
+# Login first with Connect-AzAccount if not using Cloud Shell
+
+# Get the built-in "Deploy SQL DB transparent data encryption" policy definition
+$policyDef = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/86a912f6-9a06-4e26-b447-11b16ba8659f'
+
+# Get the reference to the resource group
+$resourceGroup = Get-AzResourceGroup -Name 'MyResourceGroup'
+
+# Create the assignment using the -Location and -Identity properties
+$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -IdentityType "SystemAssigned"
+```
+
+The following example gets the definition of the built-in policy **Deploy
SQL DB transparent data encryption**, sets the target resource group, and then creates the
-assignment.
+assignment using a **user assigned** managed identity.
```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell
$policyDef = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/poli
# Get the reference to the resource group $resourceGroup = Get-AzResourceGroup -Name 'MyResourceGroup'
-# Create the assignment using the -Location and -AssignIdentity properties
-$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -AssignIdentity
+# Get the ID of the existing user assigned managed identity
+# (substitute your own resource group and identity name)
+$userassignedidentity = Get-AzUserAssignedIdentity -ResourceGroupName 'MyIdentityResourceGroup' -Name 'MyUserAssignedIdentity'
+$userassignedidentityid = $userassignedidentity.Id
+
+# Create the assignment using the -Location and -Identity properties
+$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL DB transparent data encryption' -Scope $resourceGroup.ResourceId -PolicyDefinition $policyDef -Location 'westus' -IdentityType "UserAssigned" -IdentityId $userassignedidentityid
```
-The `$assignment` variable now contains the principal ID of the managed identity along with the
-standard values returned when creating a policy assignment. It can be accessed through
-`$assignment.Identity.PrincipalId`.
+The `$assignment` variable now contains the principal ID of the managed identity along with the standard values returned when creating a policy assignment. It can be accessed through
+`$assignment.Identity.PrincipalId` for system assigned managed identities and `$assignment.Identity.UserAssignedIdentities[$userassignedidentityid].PrincipalId` for user assigned managed identities.
-### Grant defined roles with PowerShell
+### Grant a managed identity defined roles with PowerShell
The new managed identity must complete replication through Azure Active Directory before it can be granted the needed roles. Once replication is complete, the following example iterates the policy
if ($roleDefinitionIds.Count -gt 0)
} ```
-### Grant defined roles through portal
+### Grant a managed identity defined roles through the portal
There are two ways to grant an assignment's managed identity the defined roles using the portal, by using **Access control (IAM)** or by editing the policy or initiative assignment and selecting
To add a role to the assignment's managed identity, follow these steps:
Leave **Assign access to** set to the default of 'Azure AD user, group, or application'. In the **Select** box, paste or type the portion of the assignment resource ID located earlier. Once the search completes, select the object with the same name to select ID and select **Save**.-
+
## Create a remediation task
-### Create a remediation task through portal
+The following sections describe how to create a remediation task.
+
+### Create a remediation task through the portal
During evaluation, the policy assignment with **deployIfNotExists** or **modify** effects determines if there are non-compliant resources or subscriptions. When non-compliant resources or subscriptions
To create a **remediation task**, follow these steps:
> An alternate way to open the **remediation task** page is to find and select the policy from > the **Compliance** page, then select the **Create Remediation Task** button.
-1. On the **New remediation task** page, filter the resources to remediate by using the **Scope**
+1. On the **New remediation task** page, optional remediation settings are shown:
+
+ - **Failure Threshold percentage** - Specifies whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 and 100. By default, the failure threshold is 100%.
+ - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum is 10,000 resources.
+ - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 15 resources at a time. The default value is 10.
+
+ > [!NOTE]
+ > These settings cannot be changed once the remediation task has started.
+
+1. On the same page, filter the resources to remediate by using the **Scope**
ellipses to pick child resources from where the policy is assigned (including down to the individual resource objects). Additionally, use the **Locations** dropdown list to further filter
- the resources. Only resources listed in the table will be remediated.
+ the resources.
:::image type="content" source="../media/remediate-resources/select-resources.png" alt-text="Screenshot of the Remediate node and the grid of resources to remediate." border="false":::
To create a **remediation task**, follow these steps:
:::image type="content" source="../media/remediate-resources/task-progress.png" alt-text="Screenshot of the Remediation tasks tab and progress of existing remediation tasks." border="false":::
-1. Select on the **remediation task** from the policy compliance page to get details about the
- progress. The filtering used for the task is shown along with a list of the resources being
+1. Select the **remediation task** from the policy compliance page to get details about the
+ progress. The filtering used for the task is shown along with status and a list of resources being
remediated. 1. From the **Remediation task** page, select and hold (or right-click) on a resource to view either the remediation
iot-edge Reference Iot Edge For Linux On Windows Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/reference-iot-edge-for-linux-on-windows-functions.md
The **Deploy-Eflow** command is the main deployment method. The deployment comma
| acceptOptionalTelemetry | **Yes** or **No** | A shortcut to accept/deny optional telemetry and bypass the telemetry prompt. | | cpuCount | Integer value between 1 and the device's CPU cores | Number of CPU cores for the VM.<br><br>**Default value**: 1 vCore. | | memoryInMB | Integer value between 1024 and the maximum amount of free memory of the device |Memory allocated for the VM.<br><br>**Default value**: 1024 MB. |
-| vmDiskSize | Between 8 GB and 256 GB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 10 GB. |
+| vmDiskSize | Between 8 GB and 2 TB | Maximum disk size of the dynamically expanding virtual hard disk.<br><br>**Default value**: 16 GB. |
| vswitchName | Name of the virtual switch | Name of the virtual switch assigned to the EFLOW VM. | | vswitchType | **Internal** or **External** | Type of the virtual switch assigned to the EFLOW VM. | | ip4Address | IPv4 Address in the range of the DCHP Server Scope | Static Ipv4 address of the EFLOW VM. |
iot-hub Iot Concepts And Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-concepts-and-iot-hub.md
Last updated 07/07/2021+ #Customer intent: As a developer new to IoT Hub, learn the basic concepts.
You can integrate IoT Hub with other Azure services to build complete, end-to-en
* [Azure Stream Analytics](../stream-analytics/index.yml) to run real-time analytic computations on the data streaming from your devices.
+[IoT Central](../iot-central/core/overview-iot-central.md) applications use multiple IoT hubs as part of their scalable and resilient infrastructure.
+ IoT Hub has a 99.9% [Service Level Agreement for IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole. Each Azure subscription has default quota limits in place to prevent service abuse. These limits could impact the scope of your IoT solution. The current limit on a per-subscription basis is 50 IoT hubs per subscription. You can request quota increases by contacting support. For more information, see [IoT Hub Quotas and Throttling](iot-hub-devguide-quotas-throttling.md). For more details on quota limits, see one of the following articles:
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Last updated 11/24/2021
# Configure key auto-rotation in Azure Key Vault (preview)
-> [!IMPORTANT]
+> [!WARNING]
> This feature is currently disabled due to an issue with the service. ## Overview
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/overview-what-is-azure-load-testing.md
You can create a load test by using existing test scripts based on Apache JMeter
Azure Load Testing test engines abstract the required infrastructure for running a high-scale load test. The test engines run the Apache JMeter script to simulate a large number of virtual users simultaneously accessing your application endpoints. To scale out the load test, you can configure the number of test engines.
+Azure Load Testing uses Apache JMeter version 5.4.1 for running load tests.
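Because every test engine instance runs the same JMeter script, the total simulated load scales linearly with the engine count. A sketch of that relationship; the engine and thread counts here are illustrative, not service limits:

```python
def total_virtual_users(engine_instances: int, threads_per_engine: int) -> int:
    """Each engine runs the same JMeter script, so virtual users add up linearly."""
    return engine_instances * threads_per_engine

# 5 engine instances, each running a script configured for 250 threads
print(total_virtual_users(5, 250))  # 1250
```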
+ The application can be hosted anywhere: in Azure, on-premises, or in other clouds. During the load test, the service collects the following resource metrics and displays them in a dashboard: - *Client-side metrics* give you details reported by the test engine. These details include the number of virtual users, the request response time, or the number of requests per second.
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/quickstart-create-and-run-load-test.md
Previously updated : 11/30/2021 Last updated : 01/11/2022 adobe-target: true
In this section, you'll create a sample Apache JMeter script that you'll use in
<?xml version="1.0" encoding="UTF-8"?> <jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1"> <hashTree>
- <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
+ <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Azure Load Testing Quickstart" enabled="true">
<stringProp name="TestPlan.comments"></stringProp> <boolProp name="TestPlan.functional_mode">false</boolProp> <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
<stringProp name="TestPlan.user_define_classpath"></stringProp> </TestPlan> <hashTree>
- <kg.apc.jmeter.threads.UltimateThreadGroup guiclass="kg.apc.jmeter.threads.UltimateThreadGroupGui" testclass="kg.apc.jmeter.threads.UltimateThreadGroup" testname="jp@gc - Ultimate Thread Group" enabled="true">
- <collectionProp name="ultimatethreadgroupdata">
- <collectionProp name="1400604752">
- <stringProp name="1567">5</stringProp>
- <stringProp name="0">0</stringProp>
- <stringProp name="48873">30</stringProp>
- <stringProp name="49710">60</stringProp>
- <stringProp name="10">10</stringProp>
- </collectionProp>
- </collectionProp>
+ <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
+ <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
<elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true"> <boolProp name="LoopController.continue_forever">false</boolProp> <intProp name="LoopController.loops">-1</intProp> </elementProp>
- <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
- </kg.apc.jmeter.threads.UltimateThreadGroup>
+ <stringProp name="ThreadGroup.num_threads">5</stringProp>
+ <stringProp name="ThreadGroup.ramp_time">10</stringProp>
+ <boolProp name="ThreadGroup.scheduler">true</boolProp>
+ <stringProp name="ThreadGroup.duration">120</stringProp>
+ <stringProp name="ThreadGroup.delay">5</stringProp>
+ <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
+ </ThreadGroup>
<hashTree>
- <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="homepage" enabled="true">
+ <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="Homepage" enabled="true">
<elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true"> <collectionProp name="Arguments.arguments"/> </elementProp> <stringProp name="HTTPSampler.domain">your-endpoint-url</stringProp> <stringProp name="HTTPSampler.port"></stringProp>
- <stringProp name="HTTPSampler.protocol">https</stringProp>
+ <stringProp name="HTTPSampler.protocol"></stringProp>
<stringProp name="HTTPSampler.contentEncoding"></stringProp> <stringProp name="HTTPSampler.path"></stringProp> <stringProp name="HTTPSampler.method">GET</stringProp>
In this section, you'll create a sample Apache JMeter script that you'll use in
<boolProp name="HTTPSampler.use_keepalive">true</boolProp> <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp> <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
- <stringProp name="HTTPSampler.implementation">HttpClient4</stringProp>
- <stringProp name="HTTPSampler.connect_timeout">60000</stringProp>
- <stringProp name="HTTPSampler.response_timeout">60000</stringProp>
+ <stringProp name="HTTPSampler.connect_timeout"></stringProp>
+ <stringProp name="HTTPSampler.response_timeout"></stringProp>
</HTTPSamplerProxy> <hashTree/> </hashTree>
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-cicd-github-actions.md
The sample application's source repo includes an Apache JMeter script named *Sam
## Set up GitHub access permissions for Azure
-In this section, you'll configure your GitHub repository to have permissions for accessing the Azure Load Testing resource.
+The GitHub Actions workflow needs to authenticate with Azure to access Azure resources. In the sample application, you use the [Azure Login](https://github.com/Azure/login) action and an Azure Active Directory service principal to authenticate with Azure.
-To access Azure resources, you'll create an Azure Active Directory service principal and use role-based access control to assign the necessary permissions.
+In this section, you'll configure your GitHub repository to have permissions to access your Azure Load Testing resource:
-1. Run the following Azure CLI command to create a service principal:
+1. Run the following Azure CLI command to create a service principal and assign the Contributor role:
```azurecli az ad sp create-for-rbac --name "my-load-test-cicd" --role contributor \
To access Azure resources, you'll create an Azure Active Directory service princ
} ```
+ > [!NOTE]
+ > Azure Login supports multiple ways to authenticate with Azure. For other authentication options, see the [Azure and GitHub integration site](/azure/developer/github).
+ 1. Go to your forked GitHub repository for the sample application.
+ You'll add a GitHub secret to your repository for the service principal you created in the previous step. The Azure Login action uses this secret to authenticate with Azure.
+ 1. Add a new secret to your GitHub repository by selecting **Settings** > **Secrets** > **New repository secret**. :::image type="content" source="./media/tutorial-cicd-github-actions/github-new-secret.png" alt-text="Screenshot that shows selections for adding a new repository secret to your GitHub repo.":::
To access Azure resources, you'll create an Azure Active Directory service princ
--subscription "<subscription-name-or-id>" ```
+You can now use the `AZURE_CREDENTIALS` secret with the Azure Login action in your CI/CD workflow. The following code snippet describes how this works for the sample application:
+
+```yml
+jobs:
+ build-and-deploy:
+ # The type of runner that the job will run on
+ runs-on: ubuntu-latest
+
+ # Steps represent a sequence of tasks that will be executed as part of the job
+ steps:
+ # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
+ - name: Checkout GitHub Actions
+ uses: actions/checkout@v2
+
+ - name: Login to Azure
+ uses: azure/login@v1
+ continue-on-error: false
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+```
+ ## Configure the GitHub Actions workflow to run a load test In this section, you'll set up a GitHub Actions workflow that triggers the load test. The sample application repository contains a workflow file *SampleApp.yaml*. The workflow first deploys the sample web application to Azure App Service, and then invokes the load test. The GitHub action uses an environment variable to pass the URL of the web application to the Apache JMeter script.
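As a purely illustrative sketch of this mechanism (the variable name `WEBAPP_URL` and the fallback are hypothetical, not from the sample application), this is the general pattern of handing a deployment URL to a test script through an environment variable:

```python
import os

# Hypothetical illustration: the CI/CD workflow exports the deployed
# web app URL, and the test script reads it at run time.
os.environ["WEBAPP_URL"] = "https://my-sample-app.azurewebsites.net"

def get_target_url(default="http://localhost:8080"):
    # Fall back to a local default when the variable isn't set,
    # so the script also works outside the pipeline.
    return os.environ.get("WEBAPP_URL", default)

print(get_target_url())
```

In the actual workflow, the exporting side is a pipeline step rather than Python code, but the read side follows the same lookup-with-fallback shape.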
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
You might also want to explore other quickstart guides for Azure Logic Apps:
* [Create a multi-tenant based logic app using an ARM template](quickstart-create-deploy-azure-resource-manager-template.md)
-* [Create a multi-tenant based logic app using the Azure CLI](quickstart-create-deploy-azure-resource-manager-template.md)
+* [Create a multi-tenant based logic app using the Azure CLI](quickstart-logic-apps-azure-cli.md)
## Other resources
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Title: Access Azure resources from managed endpoint
+ Title: Access Azure resources from an online endpoint
description: Securely access Azure resources for your machine learning model deployment from an online endpoint with a system-assigned or user-assigned managed identity.
Previously updated : 12/22/2021 Last updated : 01/11/2022
This guide assumes you don't have a managed identity, a storage account or an on
## Define configuration YAML file for deployment
-To deploy a managed endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md) document.
+To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-managed-online.md).
The YAML files in the following examples are used to create online endpoints.
Decide on the name of your user identity name, and export that value as an envir
## Create the managed identity
-To access Azure resources, create a system-assigned or user-assigned managed identity for your endpoint.
+To access Azure resources, create a system-assigned or user-assigned managed identity for your online endpoint.
# [System-assigned managed identity](#tab/system-identity)
To create a user-assigned managed identity, use the following:
## Create storage account and container For this example, create a blob storage account and blob container, and then upload the previously created text file to the blob container.
-This is the storage account and blob container that you'll give the endpoint and managed identity access to.
+This is the storage account and blob container that you'll give the online endpoint and managed identity access to.
# [System-assigned managed identity](#tab/system-identity)
Then, upload file in container.
The following code creates an online endpoint without specifying a deployment. # [System-assigned managed identity](#tab/system-identity)
-When you create a managed endpoint, a system-assigned managed identity is created for the endpoint by default.
+When you create an online endpoint, a system-assigned managed identity is created for the endpoint by default.
>[!IMPORTANT] > System-assigned managed identities are immutable and can't be changed once created.
If you encounter any issues, see [Troubleshooting online endpoints deployment an
>[!IMPORTANT] > Online endpoints require Azure Container Registry pull permission, AcrPull permission, to the container registry and Storage Blob Data Reader permission to the default datastore of the workspace.
-You can allow the managed endpoint permission to access your storage via its system-assigned managed identity or give permission to the user-assigned managed identity to access the storage account created in the previous section.
+You can allow the online endpoint permission to access your storage via its system-assigned managed identity or give permission to the user-assigned managed identity to access the storage account created in the previous section.
# [System-assigned managed identity](#tab/system-identity)
Refer to the following script to understand how to use your identity token to ac
## Create a deployment with your configuration
-Create a deployment that's associated with the managed endpoint. [Learn more about deploying to online endpoints](how-to-deploy-managed-online-endpoints.md).
+Create a deployment that's associated with the online endpoint. [Learn more about deploying to online endpoints](how-to-deploy-managed-online-endpoints.md).
>[!WARNING] > This deployment can take approximately 8-14 minutes depending on whether the underlying environment/image is being built for the first time. Subsequent deployments using the same environment will go quicker.
When your deployment completes, the model, the environment, and the endpoint ar
## Confirm your endpoint deployed successfully
-Once your endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
+Once your online endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like:
:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/model-1/sample-request.json" :::
To call your endpoint, run:
## Delete the endpoint and storage account
-If you don't plan to continue using the deployed endpoint and storage, delete them to reduce costs. When you delete the endpoint, all of its associated deployments are deleted as well.
+If you don't plan to continue using the deployed online endpoint and storage, delete them to reduce costs. When you delete the endpoint, all of its associated deployments are deleted as well.
# [System-assigned managed identity](#tab/system-identity)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 11/19/2021 Last updated : 01/11/2022
When ACR is behind a virtual network, Azure Machine Learning cannot use it to di
> [!WARNING] > If your Azure Container Registry uses a private endpoint to communicate with the virtual network, you cannot use a managed identity with an Azure Machine Learning compute cluster. To use a managed identity with a compute cluster, use a service endpoint with the Azure Container Registry for the workspace.
+### Azure Monitor
+
+> [!WARNING]
+> Azure Monitor supports using Azure Private Link to connect to a VNet. However, Azure Machine Learning does not support using a private link-enabled Azure Monitor (including Azure Application Insights). Do __not__ configure Private Link for an Azure Monitor or Application Insights instance that you plan to use with Azure Machine Learning.
+ ## Required public internet access [!INCLUDE [machine-learning-required-public-internet-access](../../includes/machine-learning-public-internet-access.md)]
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### PyTorch
-**Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu (**Preview**)
-**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and additional python packages.
+**Name**: AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
+**Description**: An environment for deep learning with PyTorch containing the AzureML Python SDK and other Python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.1-cudnn8-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/pytorch-1.10
RUN pip install 'matplotlib>=3.3,<3.4' \
'scipy>=1.5,<1.8' \ 'numpy>=1.10,<1.22' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'tensorboard==2.6.0' \ 'tensorflow-gpu==2.6.0' \ 'onnxruntime-gpu>=1.7,<1.10' \
Other available PyTorch environments:
### LightGBM **Name**: AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
-**Description**: An environment for machine learning with Scikit-learn, LightGBM, XGBoost, Dask containing the AzureML Python SDK and additional packages.
+**Description**: An environment for machine learning with Scikit-learn, LightGBM, XGBoost, and Dask, containing the AzureML Python SDK and other packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/lightgbm
RUN HOROVOD_WITH_TENSORFLOW=1 \
'dask-ml~=1.9.0' \ 'adlfs~=0.7.0' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0'
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0'
# This is needed for mpi to locate libpython ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
### Sklearn **Name**: AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and additional python packages.
+**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and other Python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/sklearn-0.24.1
RUN pip install 'matplotlib>=3.3,<3.4' \
'scipy>=1.5,<1.6' \ 'numpy>=1.10,<1.20' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'scikit-learn==0.24.1' # This is needed for mpi to locate libpython
ENV LD_LIBRARY_PATH $AZUREML_CONDA_ENVIRONMENT_PATH/lib:$LD_LIBRARY_PATH
### TensorFlow **Name**: AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
-**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and additional python packages.
+**Description**: An environment for deep learning with TensorFlow containing the AzureML Python SDK and other Python packages.
The following Dockerfile can be customized for your personal workflows. ```dockerfile
-FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20211124.v1
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04:20211221.v1
ENV AZUREML_CONDA_ENVIRONMENT_PATH /azureml-envs/tensorflow-2.4
RUN HOROVOD_WITH_TENSORFLOW=1 \
'scipy>=1.5,<1.6' \ 'numpy>=1.10,<1.20' \ 'ipykernel~=6.0' \
- 'azureml-core==1.36.0.post2' \
- 'azureml-defaults==1.36.0' \
- 'azureml-mlflow==1.36.0' \
- 'azureml-telemetry==1.36.0' \
+ 'azureml-core==1.37.0.post1' \
+ 'azureml-defaults==1.37.0' \
+ 'azureml-mlflow==1.37.0' \
+ 'azureml-telemetry==1.37.0' \
'tensorboard==2.4.0' \ 'tensorflow-gpu==2.4.1' \ 'tensorflow-datasets==4.3.0' \
machine-learning Tutorial Deploy Models With Aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-deploy-models-with-aml.md
- Title: "Image classification tutorial: Deploy models"-
-description: This tutorial shows how to use Azure Machine Learning to deploy an image classification model with scikit-learn in a Python Jupyter notebook.
------- Previously updated : 10/19/2021-
-#Customer intent: As a professional data scientist, I can deploy the model previously trained in tutorial1.
--
-# Tutorial: Deploy an image classification model in Azure Container Instances
--
-This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](tutorial-train-models-with-aml.md), you trained machine learning models and then registered a model in your workspace on the cloud. Now you're ready to deploy the model as a web service. A web service is an image, in this case a Docker image. It encapsulates the scoring logic and the model itself.
-
-In this part of the tutorial, you use Azure Machine Learning for the following tasks:
-
-> [!div class="checklist"]
-> * Set up your testing environment.
-> * Retrieve the model from your workspace.
-> * Deploy the model to Container Instances.
-> * Test the deployed model.
-
-Container Instances is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](how-to-deploy-and-where.md).
-
->[!NOTE]
-> Code in this article was tested with Azure Machine Learning SDK version 1.0.83.
-
-## Prerequisites
-
-To run the notebook, first complete the model training in [Tutorial (part 1): Train an image classification model](tutorial-train-models-with-aml.md). Then open the *img-classification-part2-deploy.ipynb* notebook in your cloned *tutorials/image-classification-mnist-data* folder.
-
-This tutorial is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to use it on your own [local environment](how-to-configure-environment.md#local). Make sure you have installed `matplotlib` and `scikit-learn` in your environment.
-
-> [!Important]
-> The rest of this article contains the same content as you see in the notebook.
->
-> Switch to the Jupyter notebook now if you want to read along as you run the code.
-> To run a single code cell in a notebook, click the code cell and hit **Shift+Enter**. Or, run the entire notebook by choosing **Run all** from the top toolbar.
-
-## <a name="start"></a>Set up the environment
-
-Start by setting up a testing environment.
-
-### Import packages
-
-Import the Python packages needed for this tutorial.
--
-```python
-%matplotlib inline
-import numpy as np
-import matplotlib.pyplot as plt
-
-import azureml.core
-
-# Display the core SDK version number
-print("Azure ML SDK Version: ", azureml.core.VERSION)
-```
-
-## Deploy as web service
-
-Deploy the model as a web service hosted in ACI.
-
-To build the correct environment for ACI, provide the following:
-* A scoring script to show how to use the model
-* A configuration file to build the ACI
-* The model you trained before
-
-### Create scoring script
-
-Create the scoring script, called score.py, used by the web service call to show how to use the model.
-
-You must include two required functions into the scoring script:
-* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started.
-
-* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.
-
-```python
-%%writefile score.py
-import json
-import numpy as np
-import os
-import pickle
-import joblib
-
-def init():
- global model
- # AZUREML_MODEL_DIR is an environment variable created during deployment.
- # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION)
- # For multiple models, it points to the folder containing all deployed models (./azureml-models)
- model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
- model = joblib.load(model_path)
-
-def run(raw_data):
- data = np.array(json.loads(raw_data)['data'])
- # make prediction
- y_hat = model.predict(data)
- # you can return any data type as long as it is JSON-serializable
- return y_hat.tolist()
-```
-
-### Create configuration file
-
-Create a deployment configuration file and specify the number of CPUs and gigabyte of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you would have to recreate the image and redeploy the service.
--
-```python
-from azureml.core.webservice import AciWebservice
-
-aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
- memory_gb=1,
- tags={"data": "MNIST", "method" : "sklearn"},
- description='Predict MNIST with sklearn')
-```
-
-### Deploy in ACI
-Estimated time to complete: **about 2-5 minutes**
-
-Configure the image and deploy. The following code goes through these steps:
-
-1. Create environment object containing dependencies needed by the model using the environment (`tutorial-env`) saved during training.
-1. Create inference configuration necessary to deploy the model as a web service using:
- * The scoring file (`score.py`)
- * environment object created in previous step
-1. Deploy the model to the ACI container.
-1. Get the web service HTTP endpoint.
--
-```python
-%%time
-import uuid
-from azureml.core.webservice import Webservice
-from azureml.core.model import InferenceConfig
-from azureml.core.environment import Environment
-from azureml.core import Workspace
-from azureml.core.model import Model
-
-ws = Workspace.from_config()
-model = Model(ws, 'sklearn_mnist')
--
-myenv = Environment.get(workspace=ws, name="tutorial-env", version="1")
-inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
-
-service_name = 'sklearn-mnist-svc-' + str(uuid.uuid4())[:4]
-service = Model.deploy(workspace=ws,
- name=service_name,
- models=[model],
- inference_config=inference_config,
- deployment_config=aciconfig)
-
-service.wait_for_deployment(show_output=True)
-```
-
-Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
--
-```python
-print(service.scoring_uri)
-```
-
-## Test the model
--
-### Download test data
-Download the test data to the **./data/** directory
--
-```python
-import os
-from azureml.core import Dataset
-from azureml.opendatasets import MNIST
-
-data_folder = os.path.join(os.getcwd(), 'data')
-os.makedirs(data_folder, exist_ok=True)
-
-mnist_file_dataset = MNIST.get_file_dataset()
-mnist_file_dataset.download(data_folder, overwrite=True)
-```
-
-### Load test data
-
-Load the test data from the **./data/** directory created during the training tutorial.
--
-```python
-from utils import load_data
-import os
-import glob
-
-data_folder = os.path.join(os.getcwd(), 'data')
-# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
-X_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
-y_test = load_data(glob.glob(os.path.join(data_folder,"**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
-```
-
-### Predict test data
-
-Feed the test dataset to the model to get predictions.
--
-The following code goes through these steps:
-1. Send the data as a JSON array to the web service hosted in ACI.
-
-1. Use the SDK's `run` API to invoke the service. You can also make raw calls using any HTTP tool such as curl.
--
-```python
-import json
-test = json.dumps({"data": X_test.tolist()})
-test = bytes(test, encoding='utf8')
-y_hat = service.run(input_data=test)
-```
-
-### Examine the confusion matrix
-
-Generate a confusion matrix to see how many samples from the test set are classified correctly. Notice the mis-classified value for the incorrect predictions.
--
-```python
-from sklearn.metrics import confusion_matrix
-
-conf_mx = confusion_matrix(y_test, y_hat)
-print(conf_mx)
-print('Overall accuracy:', np.average(y_hat == y_test))
-```
-
-The output shows the confusion matrix:
-
-```output
-[[ 960 0 1 2 1 5 6 3 1 1]
- [ 0 1112 3 1 0 1 5 1 12 0]
- [ 9 8 920 20 10 4 10 11 37 3]
- [ 4 0 17 921 2 21 4 12 20 9]
- [ 1 2 5 3 915 0 10 2 6 38]
- [ 10 2 0 41 10 770 17 7 28 7]
- [ 9 3 7 2 6 20 907 1 3 0]
- [ 2 7 22 5 8 1 1 950 5 27]
- [ 10 15 5 21 15 27 7 11 851 12]
- [ 7 8 2 13 32 13 0 24 12 898]]
-Overall accuracy: 0.9204
-```
-
-Use `matplotlib` to display the confusion matrix as a graph. In this graph, the X axis represents the actual values, and the Y axis represents the predicted values. The color in each grid represents the error rate. The lighter the color, the higher the error rate is. For example, many 5's are mis-classified as 3's. So you see a bright grid at (5,3).
-
-```python
-# normalize the diagonal cells so that they don't overpower the rest of the cells when visualized
-row_sums = conf_mx.sum(axis=1, keepdims=True)
-norm_conf_mx = conf_mx / row_sums
-np.fill_diagonal(norm_conf_mx, 0)
-
-fig = plt.figure(figsize=(8,5))
-ax = fig.add_subplot(111)
-cax = ax.matshow(norm_conf_mx, cmap=plt.cm.bone)
-ticks = np.arange(0, 10, 1)
-ax.set_xticks(ticks)
-ax.set_yticks(ticks)
-ax.set_xticklabels(ticks)
-ax.set_yticklabels(ticks)
-fig.colorbar(cax)
-plt.ylabel('true labels', fontsize=14)
-plt.xlabel('predicted values', fontsize=14)
-plt.savefig('conf.png')
-plt.show()
-```
-
-![Chart showing confusion matrix](./media/tutorial-deploy-models-with-aml/confusion.png)
--
-## Show predictions
-
-Test the deployed model with a random sample of 30 images from the test data.
--
-1. Print the returned predictions and plot them along with the input images. Red font and inverse image (white on black) is used to highlight the misclassified samples.
-
- Since the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample.
---
-```python
-import json
-
-# find 30 random samples from test set
-n = 30
-sample_indices = np.random.permutation(X_test.shape[0])[0:n]
-
-test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
-test_samples = bytes(test_samples, encoding='utf8')
-
-# predict using the deployed model
-result = service.run(input_data=test_samples)
-
-# compare actual value vs. the predicted values:
-i = 0
-plt.figure(figsize = (20, 1))
-
-for s in sample_indices:
- plt.subplot(1, n, i + 1)
- plt.axhline('')
- plt.axvline('')
-
- # use different color for misclassified sample
- font_color = 'red' if y_test[s] != result[i] else 'black'
- clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
-
- plt.text(x=10, y =-10, s=result[i], fontsize=18, color=font_color)
- plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
-
- i = i + 1
-plt.show()
-```
-
-You can also send raw HTTP request to test the web service.
--
-```python
-import requests
-
-# send a random row from the test set to score
-random_index = np.random.randint(0, len(X_test)-1)
-input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
-
-headers = {'Content-Type':'application/json'}
-
-# for AKS deployment you'd need to the service key in the header as well
-# api_key = service.get_key()
-# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
-
-resp = requests.post(service.scoring_uri, input_data, headers=headers)
-
-print("POST to url", service.scoring_uri)
-#print("input data:", input_data)
-print("label:", y_test[random_index])
-print("prediction:", resp.text)
-```
-
-## Clean up resources
-
-To keep the resource group and workspace for other tutorials and exploration, you can delete only the ACI deployment using this API call:
--
-```python
-service.delete()
-```
---
-## Next steps
-
-+ Learn about all of the [deployment options for Azure Machine Learning](how-to-deploy-and-where.md).
-+ Learn how to [create clients for the web service](how-to-consume-web-service.md).
-+ [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously.
-+ Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).
-+ Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-train-deploy-notebook.md
+
+ Title: "Tutorial: Train and deploy an example in Jupyter Notebook"
+
+description: Use Azure Machine Learning to train and deploy an image classification model with scikit-learn in a cloud-based Python Jupyter Notebook.
++++++ Last updated : 01/05/2022+
+#Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
++
+# Tutorial: Train and deploy an image classification model with an example Jupyter Notebook
+
+In this tutorial, you train a machine learning model on remote compute resources. You'll use the training and deployment workflow for Azure Machine Learning in a Python Jupyter Notebook. You can then use the notebook as a template to train your own machine learning model with your own data.
+
+This tutorial trains a simple logistic regression by using the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset and [scikit-learn](https://scikit-learn.org) with Azure Machine Learning. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28 x 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.
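As a minimal, self-contained sketch of the same task (not the notebook's own code), the following trains a multi-class logistic regression with scikit-learn, using its small bundled digits dataset (8 x 8 images) as a stand-in for MNIST:

```python
# Sketch only: scikit-learn's digits dataset substitutes for MNIST here,
# so the images are 8 x 8 rather than 28 x 28.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Scale pixel intensities to the 0-1 range, as the tutorial does for MNIST.
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000)  # handles multi-class by default
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(round(accuracy, 2))
```

The notebook additionally runs this training on remote compute and logs metrics with MLflow; the snippet above covers only the core classifier.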
+
+Learn how to take the following actions:
+
+> [!div class="checklist"]
+> * Download a dataset and look at the data.
+> * Train an image classification model and log metrics using MLflow.
+> * Deploy the model to do real-time inference.
++
+## Prerequisites
+
+* Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to:
+ * Create a workspace.
+ * Create a cloud-based compute instance to use for your development environment.
+
+## <a name="azure"></a>Run a notebook from your workspace
+
+Azure Machine Learning includes a cloud notebook server in your workspace for an install-free and pre-configured experience. Use [your own environment](how-to-configure-environment.md#local) if you prefer to have control over your environment, packages, and dependencies.
+
+ Follow along with this video or use the detailed steps to clone and run the tutorial notebook from your workspace.
+
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4mTUr]
+
+> [!NOTE]
+> The video helps you understand the process, but shows opening a different file. For this tutorial, once you've cloned the **tutorials** folder, use the [instructions below to open the cloned notebook](#open).
+
+## <a name="clone"></a> Clone a notebook folder
+
+You complete the following experiment setup and run steps in Azure Machine Learning studio. This consolidated interface includes machine learning tools for data science practitioners of all skill levels.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+
+1. Select your subscription and the workspace you created.
+
+1. On the left, select **Notebooks**.
+
+1. At the top, select the **Samples** tab.
+
+1. Open the **Python** folder.
+
+1. Open the folder with a version number on it. This number represents the current release for the Python SDK.
+
+1. Select the **...** button at the right of the **tutorials** folder, and then select **Clone**.
+
+ :::image type="content" source="medi